Computer Hardware
What is Hardware?
Your PC (Personal Computer) is a system consisting of many components. Some of those components, like
Windows XP and all your other programs, are software. The stuff you can actually see and touch (and
would likely break if you threw it out a fifth-story window) is hardware.
Not everybody has exactly the same hardware, but those of you who have a desktop system, like the
example shown in Figure 1, probably have most of the components shown in that figure. Those of you
with notebook computers probably have most of the same components; in your case they are simply
integrated into a single book-sized portable unit.
Figure 1
The system unit is the actual computer; everything else is called a peripheral device. Your computer's
system unit probably has at least one floppy disk drive, and one CD or DVD drive, into which you can insert
floppy disks and CDs. There's another disk drive, called the hard disk, inside the system unit, as shown in
Figure 2. You can't remove that disk, or even see it. But it's there. And everything that's currently "in your
computer" is actually stored on that hard disk. (We know this because there is no place else inside the
computer where you can store information!).
Figure 2
The floppy drive and CD drive are often referred to as drives with removable media or removable drives for
short, because you can remove whatever disk is currently in the drive, and replace it with another. Your
computer's hard disk can store as much information as tens of thousands of floppy disks, so don't worry
about running out of space on your hard disk any time soon. As a rule, you want to store everything you
create or download on your hard disk. Use the floppy disks and CDs to send copies of files through the mail,
or to make backup copies of important items.
Figure 3
The idea is to rest your hand comfortably on the mouse, with
your index finger touching (but not pressing on) the left mouse
button. Then, as you move the mouse, the mouse pointer (the
little arrow on the screen) moves in the same direction. When
moving the mouse, try to keep the buttons aimed toward the
monitor -- don't "twist" the mouse as that just makes it all the
harder to control the position of the mouse pointer.
If you find yourself reaching too far to get the mouse pointer where you want it to be on
the screen, just pick up the mouse, move it to where it's comfortable to hold it, and place
it back down on the mousepad or desk. The buzzwords that describe how you use the
mouse are as follows:
Point: To point to an item means to move the mouse pointer so that it's touching
the item.
Click: Point to the item, then tap (press and release) the left mouse button.
Double-click: Point to the item, and tap the left mouse button twice in rapid
succession - click-click as fast as you can.
Right-click: Point to the item, then tap the mouse button on the right.
Drag: Point to an item, then hold down the left mouse button as you move the
mouse. To drop the item, release the left mouse button.
Right-drag: Point to an item, then hold down the right mouse button as you move
the mouse. To drop the item, release the right mouse button.
Figure 4
Most keyboards also contain a set of navigation keys. You can use the navigation keys to move
through text on the screen. The navigation keys won't move the mouse pointer; only the mouse
moves the mouse pointer.
On smaller keyboards where space is limited, such as on a notebook computer, the navigation keys and
numeric keypad might be one and the same. There will be a Num Lock key on the keypad. When the Num
Lock key is "on", the numeric keypad keys type numbers. When the Num Lock key is "off", the navigation
keys come into play. The Num Lock key acts as a toggle, which is to say that when you tap it, it switches to
the opposite state. For example, if Num Lock is on, tapping that key turns it off; if Num Lock is off, tapping
that key turns Num Lock on.
There are, and have been, many processors on the market, running at many different speeds. The speed is
measured in megahertz (MHz). One MHz is 1 million cycles per second (or computer
instructions), so if you have a processor running at 2000 MHz, then your computer is running at
2,000,000,000 cycles per second, which in more basic terms is the number of instructions your computer can
carry out. Another important abbreviation is gigahertz (GHz). A single GHz, or 1 GHz, is the same as 1000
MHz. It sounds a bit confusing, so here is a simple conversion:
1000 MHz (megahertz) = 1 GHz (gigahertz) = 1,000,000,000 cycles per second (or computer instructions).
Now you can see why they abbreviate it; could you imagine going to a PC store and asking for a "one
thousand million cycle" PC, please? A bit of a mouthful, isn't it?
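The MHz/GHz arithmetic above can be sketched in a few lines of Python. The function names here are purely illustrative, not from any library:

```python
def mhz_to_cycles_per_second(mhz):
    # 1 MHz is one million cycles (instructions) per second.
    return mhz * 1_000_000

def mhz_to_ghz(mhz):
    # 1000 MHz equals 1 GHz.
    return mhz / 1000

print(mhz_to_cycles_per_second(2000))  # 2000000000 (two billion cycles/s)
print(mhz_to_ghz(2000))                # 2.0 (i.e. a 2 GHz processor)
```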
So when buying a new computer, always look for the fastest you can afford. The fastest on the market at the
time of writing this article is 3.8 GHz (3800 MHz). Remember, though, that it is not necessary to purchase
such a fast processor; balance your needs. Do you really need top of the range? Especially when the
difference between, say, a 3.5 GHz (3500 MHz) and a 3.8 GHz (3800 MHz) processor will barely be noticed
(if noticed at all) by you, while the price difference is around 100. With the money you save you could get a
nice printer and scanner package.
Now that we have covered speeds, there is one more important subject to cover: which processor? There
are three competitors at present: the AMD Athlon, the Intel Pentium and the Intel Celeron. They come in
many guises, but basically the more cores they have and the higher their speed, the better and faster they are.
Processors now come as dual core, triple core and quad core. These processors are the equivalent of running
two CPUs (dual core), three CPUs (triple core) or four CPUs (quad core).
In the past the Intel Pentium was the best and most expensive of them all, and it remains today one of
the most popular on the market. In layman's terms it is, or was, the designer processor, although
AMD have some superb, if not better, releases and equally highly priced and advanced
products. It would be hard to say which is best, as they are direct competitors.
Lastly there is the Intel Celeron; this processor is a budget version of the Intel Pentium 4, and it is the
processor you find in most budget computers. If the purse is tight and you need a computer,
then this is your port of call. You will find many sub-400 computers fitted with this
processor.
The table below lists CPU sockets and slots, the processor families they support, and their main characteristics ("n/a" marks values not given):

Socket name | Introduced | End of life | CPU families | Package | Pins | Pin pitch | Bus speed | Notes
DIP | 1970s | Still available | Intel 8086, Intel 8088 | DIP | 40 | 2.54 mm | 5/10 MHz | n/a
PLCC | n/a | Still available | Intel 80186, Intel 80286, Intel 80386 | PLCC | 68, 132 | 1.27 mm | 6-40 MHz | n/a
Socket 1 | 1989 | n/a | Intel 80486 | PGA | 169 | n/a | 16-50 MHz | n/a
Socket 2 | n/a | n/a | Intel 80486 | PGA | 238 | n/a | 16-50 MHz | n/a
Socket 3 | 1991 | n/a | Intel 80486 | PGA | 237 | n/a | 16-50 MHz | n/a
Socket 4 | n/a | n/a | Intel Pentium | PGA | 273 | n/a | 60-66 MHz | n/a
Socket 5 | n/a | n/a | Intel Pentium, AMD K5, IDT WinChip C6, IDT WinChip 2 | PGA | 320 | n/a | 50-66 MHz | n/a
Socket 6 | n/a | n/a | Intel 80486 | PGA | 235 | n/a | n/a | n/a
Socket 7 | 1994 | n/a | Intel Pentium, Intel Pentium MMX, AMD K6 | PGA | 321 | n/a | 50-66 MHz | n/a
Super Socket 7 | 1998 | n/a | AMD K6-2, AMD K6-III, Rise mP6, Cyrix MII | PGA | 321 | n/a | 66-100 MHz | n/a
Socket 8 | 1995 | n/a | n/a | PGA | 387 | n/a | 60-66 MHz | n/a
Slot 1 | 1997 | n/a | Intel Pentium II (Klamath), Intel Pentium III (Katmai, Coppermine), Intel Celeron (Covington, Mendocino) | Slot | 242 | n/a | 66-133 MHz | n/a
Slot 2 | 1998 | n/a | n/a | Slot | 330 | n/a | 100-133 MHz | n/a
Socket 463 / Socket NexGen | n/a | n/a | NexGen Nx586 | PGA | 463 | n/a | n/a | n/a
Socket 499 | n/a | n/a | Alpha 21164A | Slot | 587 | n/a | n/a | n/a
Slot A | 1999 | n/a | AMD Athlon | Slot | 242 | n/a | 100 MHz | n/a
Slot B | n/a | n/a | Alpha 21264 | Slot | 587 | n/a | n/a | n/a
Socket 370 | 1999 | n/a | n/a | PGA | 370 | 1.27 mm | 66-133 MHz | n/a
Socket 462 / Socket A | 2000 | n/a | AMD Athlon, AMD Duron, AMD Athlon XP, AMD Athlon XP-M, AMD Athlon MP, AMD Sempron | PGA | 462 | n/a | n/a | Megatransfers/second FSB in the later models
Socket 423 | 2000 | n/a | Intel Pentium 4 | PGA | 423 | n/a | n/a | n/a
Socket 478 / Socket N | 2000 | n/a | Intel Pentium 4, Intel Celeron, Intel Pentium 4 EE, Intel Pentium 4 M | PGA | 478 | 1 mm | n/a | n/a
Socket 495 | 2000 | n/a | Intel Celeron | PGA | 495 | 1.27 mm | n/a | n/a
PAC418 | 2001 | n/a | Intel Itanium | PGA | 418 | n/a | 133 MHz | n/a
Socket 603 | 2001 | n/a | Intel Xeon | PGA | 603 | n/a | n/a | n/a
PAC611 | 2002 | n/a | Intel Itanium 2, HP PA-8800, HP PA-8900 | PGA | 611 | n/a | n/a | n/a
Socket 604 | 2002 | n/a | Intel Xeon | PGA | 604 | n/a | n/a | n/a
Socket 754 | 2003 | n/a | AMD Athlon 64, AMD Sempron, AMD Turion 64 | PGA | 754 | 1.27 mm | 200-800 MHz | n/a
Socket 940 | 2003 | n/a | n/a | PGA | 940 | 1.27 mm | 200-1000 MHz | n/a
Socket 479 | 2003 | n/a | Intel Pentium M, Intel Celeron M | PGA | 479 | n/a | n/a | n/a
Socket 939 | 2004 | 11/2008 | AMD Athlon 64, AMD Athlon 64 FX, AMD Athlon 64 X2, AMD Opteron | PGA | 939 | 1.27 mm | 200-1000 MHz | Support of Athlon 64 FX to 1 GHz; support of Opteron limited to 100-series only
LGA 775 / Socket T | 2004 | n/a | Intel Pentium 4, Intel Pentium D, Intel Celeron, Intel Celeron D, Intel Pentium XE, Intel Core 2 Duo, Intel Core 2 Quad, Intel Xeon | LGA | 775 | 1.09 mm x 1.17 mm | 1600 MHz | n/a
Socket 563 | n/a | n/a | n/a | PGA | 563 | n/a | n/a | n/a
Socket M | 2006 | n/a | n/a | PGA | 478 | n/a | n/a | n/a
LGA 771 / Socket J | 2006 | n/a | Intel Xeon | LGA | 771 | n/a | n/a | n/a
Socket S1 | 2006 | n/a | AMD Turion 64 X2 | PGA | 638 | 1.27 mm | 200-800 MHz | n/a
Socket AM2 | 2006 | n/a | AMD Athlon 64, AMD Athlon 64 X2 | PGA | 940 | 1.27 mm | 200-1000 MHz | n/a
Socket F | 2006 | n/a | AMD Athlon 64 FX, AMD Opteron | LGA | 1207 | 1.1 mm | n/a | n/a
Socket AM2+ | 2007 | n/a | AMD Athlon 64, AMD Athlon X2, AMD Phenom, AMD Phenom II | PGA | 940 | n/a | 200-2600 MHz | n/a
Socket P | 2007 | n/a | Intel Core 2 | PGA | 478 | n/a | n/a | n/a
Socket 441 | 2008 | n/a | Intel Atom | PGA | 441 | n/a | n/a | n/a
LGA 1366 / Socket B | 2008 | n/a | n/a | LGA | 1366 | n/a | 4.8-6.4 GT/s | n/a
Socket AM3 | 2009 | n/a | AMD Phenom II, AMD Athlon II, AMD Sempron | PGA | 941 | 1.27 mm | 200-3200 MHz | n/a
LGA 1156 / Socket H | 2009 | n/a | n/a | LGA | 1156 | 1.09 mm x 1.17 mm | 2.5 GT/s | n/a
Socket G34 | 2010 | n/a | n/a | LGA | 1974 | n/a | 200-3200 MHz | Replaces Socket F
Socket C32 | 2010 | n/a | n/a | LGA | 1207 | n/a | 200-3200 MHz | Replaces Socket F, Socket AM3
LGA 1248 | 2010 | n/a | n/a | LGA | 1248 | n/a | 4.8 GT/s | n/a
LGA 1567 | 2010 | n/a | n/a | LGA | 1567 | n/a | 4.8-6.4 GT/s | n/a
LGA 1155 / Socket H2 | 2011/Q1 | n/a | n/a | LGA | 1155 | n/a | 5 GT/s | n/a
LGA 2011 / Socket R | Future (2011/Q3) | n/a | n/a | LGA | 2011 | n/a | 4.8-6.4 GT/s | n/a
Socket FM1 | 2011 | n/a | n/a | PGA | 905 | 1.27 mm | n/a | n/a
04. Motherboard
The motherboard is the most essential component in a personal computer. It is the piece of
hardware which contains the computer's microprocessor chip, and everything attached to it
is vital to making the computer run.
Motherboard Components
If you open your computer's case, the motherboard is the flat, rectangular piece of circuit
board to which everything seems to connect for one reason or another. It contains the
following key components:
A microprocessor "socket", which defines what kind of central processing unit the motherboard
uses;
A chipset, which forms the computer's logic system. It is usually composed of two parts
called bridges (a "north" bridge and its opposite, the "south" bridge), which connect
the CPU to the rest of the system;
A Basic Input/Output System (BIOS) chip, which controls the most basic functions of the
computer and tests them each time it starts up; and
A real-time clock, a battery-operated chip which maintains the system's time
and other basic functions.
The motherboard also has slots or ports for the attachment of various peripherals or support
system/hardware. There is an Accelerated Graphics Port (AGP), which is used exclusively for
video cards; Integrated Drive Electronics (IDE), which provides the interfaces for the hard
disk drives; memory or RAM cards; and Peripheral Component Interconnect (PCI), which
provides electronic connections for video capture cards and network cards, among others.
Integrated peripherals
Block diagram of a modern motherboard, which supports many on-board peripheral functions as well
as several expansion slots.
With the steadily declining costs and size of integrated circuits, it is now possible to include support
for many peripherals on the motherboard. By combining many functions on one PCB, the physical size
and total cost of the system may be reduced; highly integrated motherboards are thus especially
popular in small form factor and budget computers.
For example, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on
AMD processors, has on-board support for a very large range of peripherals:
disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID
0/1 support)
Types of RAM
Each memory point is thus characterised by an address which corresponds to a row number and a column
number. This access is not instant, and the access delay is known as latency time. Consequently, the time
required for access to data in the memory is equal to cycle time plus latency time.
Thus, for a DRAM memory, access time is 60 nanoseconds (35 ns cycle time and 25 ns latency time). On a
computer, the cycle time is the inverse of the clock frequency; for example, for a computer
with a frequency of 200 MHz, cycle time is 5 ns (1/(200x10^6) s).
Consequently, a computer with a high frequency using memories with an access time much longer than the
processor cycle time must insert wait states to access the memory. For a computer with a frequency of 200
MHz using DRAM memories (with an access time of 60 ns), there are 11 wait states for a transfer cycle. The
computer's performance decreases as the number of wait states increases, so the use of faster memories is
recommended.
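The cycle-time and wait-state arithmetic above can be checked with a short Python sketch (the helper names are illustrative):

```python
import math

def cycle_time_ns(frequency_mhz):
    # Cycle time is the inverse of the clock frequency:
    # 1 / (f x 10^6) seconds, expressed in nanoseconds.
    return 1000 / frequency_mhz

def wait_states(access_time_ns, frequency_mhz):
    # Total cycles needed to cover the memory access time,
    # minus the one cycle in which the transfer itself happens.
    cycles_needed = math.ceil(access_time_ns / cycle_time_ns(frequency_mhz))
    return cycles_needed - 1

print(cycle_time_ns(200))    # 5.0 ns at 200 MHz
print(wait_states(60, 200))  # 11 wait states for a 60 ns DRAM
```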
SIMM modules with 30 connectors (dimensions 89x13 mm) are 8-bit memories with which first-generation PCs (286, 386) were equipped.
SIMM modules with 72 connectors (dimensions 108x25 mm) are memories able to store 32 bits
of data simultaneously. These memories are found on PCs from the 386DX to the first Pentiums. On
the latter, the processor works with a 64-bit data bus, which is why these computers must be equipped
with two SIMM modules. A 30-pin module cannot be installed in a 72-connector position because a
notch (at the centre of the connectors) would prevent it from being plugged in.
Modules in DIMM format (Dual Inline Memory Module) are 64-bit memories, which explains why they
do not need pairing. DIMM modules have memory chips on both sides of the printed circuit board and
have 84 connectors on each side, giving them a total of 168 pins. In addition to having larger
dimensions than SIMM modules (130x25 mm), these modules have a second notch to avoid confusion.
It may be interesting to note that DIMM connectors have been enhanced to make insertion easier, thanks
to levers located on either side of the connector.
Smaller modules also exist; they are known as SO DIMM (Small Outline DIMM) and are designed for portable
computers. SO DIMM modules have only 144 pins for 64-bit memories and 77 pins for 32-bit memories.
Modules in RIMM format (Rambus Inline Memory Module, also called RD-RAM or DRD-RAM) are 64-bit
memories developed by Rambus. They have 184 pins. These modules have two locating notches to
avoid any risk of confusion with the previous modules.
Given their high transfer speed, RIMM modules have a thermal film which is supposed to improve heat
transfer.
As with DIMMs, smaller modules also exist; they are known as SO RIMM (Small Outline RIMM) and are designed
for portable computers. SO RIMM modules have only 160 pins.
DRAM
DRAM (Dynamic RAM) is the most common type of memory at the start of this millennium. It is a
memory whose transistors are arranged in a matrix of rows and columns. A transistor, coupled with a
capacitor, holds one bit of information. Since 1 byte contains 8 bits, a DRAM memory module of 256 MB will
thus contain 256 x 2^10 x 2^10 = 256 x 1024 x 1024 = 268,435,456 bytes = 268,435,456 x 8 =
2,147,483,648 bits = 2,147,483,648 transistors. A module of 256 MB thus has a capacity of 268,435,456
bytes, or about 268 million bytes! These memories have access times of 60 ns.
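The capacity arithmetic above is easy to reproduce:

```python
MODULE_MB = 256  # module size used in the example above

bytes_total = MODULE_MB * 1024 * 1024  # 2^10 x 2^10 bytes per MB
bits_total = bytes_total * 8           # one transistor (plus capacitor) per bit

print(bytes_total)  # 268435456 bytes
print(bits_total)   # 2147483648 bits, hence 2,147,483,648 transistors
```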
Furthermore, access to memory generally concerns data stored consecutively in the memory. Thus burst
mode allows access to the three pieces of data following the first piece with no additional latency time. In
this burst mode, time required to access the first piece of data is equal to cycle time plus latency time, and
the time required to access the other three pieces of data is equal to just the cycle time; the four access times
are thus written in the form X-Y-Y-Y, for example 5-3-3-3 indicates a memory for which 5 clock cycles are
needed to access the first piece of data and 3 for the subsequent ones.
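The X-Y-Y-Y notation can be summed to compare total burst cost; the 5-2-2-2 and 5-1-1-1 figures below come from the EDO and SDRAM sections that follow:

```python
def burst_total_cycles(timing):
    # A timing of (5, 3, 3, 3) means 5 clock cycles for the first piece
    # of data and 3 for each of the next three pieces in the burst.
    return sum(timing)

print(burst_total_cycles((5, 3, 3, 3)))  # 14 cycles: burst-mode DRAM
print(burst_total_cycles((5, 2, 2, 2)))  # 11 cycles: EDO
print(burst_total_cycles((5, 1, 1, 1)))  # 8 cycles: SDRAM
```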
DRAM FPM
To speed up access to the DRAM, there is a technique, known as paging, which involves accessing data
located in the same row by changing only the column address, thus avoiding repetition of the row
number between reads. This is known as DRAM FPM (Fast Page Mode). FPM achieves
access times of around 70 to 80 nanoseconds for operating frequencies between 25 and 33 MHz.
DRAM EDO
DRAM EDO (Extended Data Out, sometimes also called "hyper page") was introduced in 1995. The
technique used with this type of memory involves addressing the next column while the data in the current
column is being read. This creates an overlap of accesses, saving time on each cycle. EDO memory access time is thus
around 50 to 60 nanoseconds for operating frequencies between 33 and 66 MHz.
Thus EDO RAM, when used in burst mode, achieves 5-2-2-2 cycles, a gain of 3 cycles over a 5-3-3-3 memory on
access to 4 pieces of data. Since EDO memory did not work at frequencies higher than 66 MHz, it was
abandoned in favour of SDRAM.
SDRAM
SDRAM (Synchronous DRAM), introduced in 1997, allows reading of data synchronised with the
motherboard bus, unlike EDO and FPM memories (known as asynchronous), which have their own
clock. SDRAM thus eliminates waiting times due to synchronisation with the motherboard. It
achieves a 5-1-1-1 burst mode cycle, a gain of 3 cycles in comparison with EDO RAM. SDRAM
is able to operate at frequencies up to 150 MHz, allowing it to achieve access times of around
10 ns.
DDR-SDRAM
DDR-SDRAM (Double Data Rate SDRAM) is a memory, based on SDRAM technology, which
doubles the transfer rate of SDRAM at the same clock frequency.
Data are read from or written to memory on a clock edge. Standard DRAM memories use a method known
as SDR (Single Data Rate), reading or writing one piece of data on each rising edge.
DDR doubles the rate of reading/writing, with a clock at the same frequency, by transferring data on
each rising edge and on each falling edge.
DDR memories generally have a product name of the form PCXXXX, where "XXXX" represents the speed in
MB/s.
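The PCXXXX rating follows directly from the effective frequency: a 64-bit module moves 8 bytes per transfer, so peak speed in MB/s is the effective frequency times 8. A quick sketch:

```python
def pc_rating(effective_mhz, bus_width_bytes=8):
    # A 64-bit (8-byte) module transfers 8 bytes per effective clock,
    # so peak speed in MB/s = effective frequency x 8.
    return effective_mhz * bus_width_bytes

print(pc_rating(200))  # 1600 -> DDR200 is sold as PC1600
print(pc_rating(400))  # 3200 -> DDR400 is sold as PC3200
```

The same relationship underlies the summary table later in this section.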
DDR2-SDRAM
DDR2 (or DDR-II) memory achieves speeds twice as high as those of DDR at the same
external frequency.
QDR (Quadruple Data Rate, or quad-pumped) designates the reading and writing method used: DDR2
memory uses two separate channels for reading and writing, so that it is able to send or receive twice
as much data as DDR.
DDR2 modules also have more connectors than classic DDR (240 for DDR2 compared with 184 for DDR).
Summary table
The table below gives the equivalence between the motherboard frequency (FSB), the memory (RAM)
frequency and its speed:

Memory name | Module name | Frequency (RAM) | Frequency (FSB) | Speed
DDR200 | PC1600 | 200 MHz | 100 MHz | 1.6 GB/s
DDR266 | PC2100 | 266 MHz | 133 MHz | 2.1 GB/s
DDR333 | PC2700 | 333 MHz | 166 MHz | 2.7 GB/s
DDR400 | PC3200 | 400 MHz | 200 MHz | 3.2 GB/s
DDR433 | PC3500 | 433 MHz | 217 MHz | 3.5 GB/s
DDR466 | PC3700 | 466 MHz | 233 MHz | 3.7 GB/s
DDR500 | PC4000 | 500 MHz | 250 MHz | 4 GB/s
DDR533 | PC4200 | 533 MHz | 266 MHz | 4.2 GB/s
DDR538 | PC4300 | 538 MHz | 269 MHz | 4.3 GB/s
DDR550 | PC4400 | 550 MHz | 275 MHz | 4.4 GB/s
DDR2-400 | PC2-3200 | 400 MHz | 100 MHz | 3.2 GB/s
DDR2-533 | PC2-4300 | 533 MHz | 133 MHz | 4.3 GB/s
DDR2-667 | PC2-5300 | 667 MHz | 167 MHz | 5.3 GB/s
DDR2-675 | PC2-5400 | 675 MHz | 172.5 MHz | 5.4 GB/s
DDR2-800 | PC2-6400 | 800 MHz | 200 MHz | 6.4 GB/s
Synchronisation (timings)
It is not unusual to see figures such as 3-2-2-2 or 2-3-3-2 describing the parameterisation of the random
access memory. This succession of four figures describes the synchronisation of the memory (its timings), i.e.
the succession of clock cycles needed to access a piece of data stored in RAM. These four figures
generally correspond, in order, to the following values:
CAS delay or CAS latency (CAS meaning Column Address Strobe): the number of clock cycles
that elapse between the read command being sent and the piece of data actually arriving. In other
words, it is the time needed to access a column.
RAS Precharge Time (known as tRP, RAS meaning Row Address Strobe): the number of clock
cycles between two RAS instructions, i.e. between two accesses to a row.
RAS to CAS delay (sometimes called tRCD): the number of clock cycles corresponding to the access
time from a row to a column.
RAS active time (sometimes called tRAS): the number of clock cycles corresponding to the time
needed to access a row.
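Since each timing value is a count of clock cycles, it can be converted to nanoseconds from the memory clock frequency. A minimal sketch (the frequency and CAS value below are illustrative):

```python
def timing_to_ns(clock_mhz, cycles):
    # One clock cycle lasts 1000 / clock_mhz nanoseconds.
    return cycles * 1000 / clock_mhz

# A CAS latency of 3 on a 200 MHz memory clock:
print(timing_to_ns(200, 3))  # 15.0 ns before the first piece of data arrives
```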
The memory cards are equipped with a device called SPD (Serial Presence Detect), allowing the BIOS to
find out the nominal setting values defined by the manufacturer. It is an EEPROM whose data will be loaded
by the BIOS if the user chooses "auto" setting.
Error correction
Some memories have mechanisms for correcting errors to ensure the integrity of the data they contain.
Memory of this type is generally used in systems working on critical data, which is why it is
found in servers.
Parity bit
Modules with a parity bit ensure that the data contained in the memory are the data required. To achieve this,
one bit for each byte stored in the memory keeps the parity of the data bits. The parity bit is
1 when the number of 1 bits among the data bits is odd, and 0 in the opposite case.
Modules with a parity bit thus allow the integrity of the data to be checked, but do not provide error
correction. Moreover, for every 9 MB of memory, only 8 MB are used to store data, since the last megabyte
holds the parity bits.
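The parity rule described above is simple to express in code:

```python
def parity_bit(byte):
    # 1 when the number of 1-bits in the byte is odd, 0 when it is even.
    return bin(byte).count("1") % 2

print(parity_bit(0b00000111))  # 1 (three 1-bits: odd)
print(parity_bit(0b00000011))  # 0 (two 1-bits: even)
```

On a read, the module recomputes the parity of the data bits and compares it with the stored bit; a mismatch signals an error, but, as noted above, gives no way to correct it.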
ECC modules
ECC (Error-Correcting Code) memory modules are memories with several bits dedicated to error
correction (known as control bits). These modules, used mainly in servers, allow detection and
correction of errors.
Dual Channel
Some memory controllers offer a dual channel for the memory. The memory modules are used in pairs to
achieve higher bandwidth and thus make the best use of the system's capacity. When using the Dual
Channel, it is vital to use identical modules in a pair (same frequency and capacity and preferably the same
brand).
Video hardware is often integrated into the motherboard; however, all modern
motherboards provide expansion ports to which a video card can be attached. In this configuration it is
sometimes referred to as a video controller or graphics controller. Modern low-end to mid-range
motherboards often include a graphics chipset manufactured by the developer of the northbridge (i.e. an
nForce chipset with Nvidia graphics or an Intel chipset with Intel graphics) on the motherboard. This
graphics chip usually has a small quantity of embedded memory and takes some of the system's main RAM,
reducing the total RAM available. This is usually called integrated graphics or on-board graphics, and is
low-performance and undesirable for those wishing to run 3D applications. A dedicated graphics card, on the
other hand, has its own RAM and processor specifically for processing video images, and thus offloads this
work from the CPU and system RAM. Almost all of these motherboards allow the integrated graphics chip
to be disabled in the BIOS, and have an AGP, PCI, or PCI Express slot for adding a higher-performance
graphics card in place of the integrated graphics.
Components
A modern video card consists of a printed circuit board on which the components are mounted. These
include:
Video BIOS
The video BIOS or firmware contains the basic program, which is usually hidden, that governs the video
card's operations and provides the instructions that allow the computer and software to interact with the card.
It may contain information on the memory timing, operating speeds and voltages of the graphics processor,
RAM, and other information. It is sometimes possible to change the BIOS (e.g. to unlock factory-locked
settings for higher performance), although this is typically only done by video card overclockers and has the
potential to irreversibly damage the card.
Video memory
RAMDAC
The RAMDAC, or Random Access Memory Digital-to-Analog Converter, converts digital signals to analog
signals for use by a computer display that uses analog inputs, such as CRT displays. The RAMDAC is a kind
of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and
the RAMDAC data-transfer rate, the converter will be able to support different computer-display refresh
rates. With CRT displays, it is best to work above 75 Hz and never below 60 Hz, in order to minimize flicker.
(With LCD displays, flicker is not a problem.) Due to the growing popularity of digital computer displays
and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component.
All current LCDs, plasma displays and TVs work in the digital domain and do not require a RAMDAC.
There are few remaining legacy LCD and plasma displays that feature analog inputs (VGA,
component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to
digital before they can display it, with the unavoidable loss of quality stemming from this
digital-to-analog-to-digital conversion.
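The link between RAMDAC rate and refresh rate can be sketched as a pixel-clock budget: the RAMDAC's pixel rate divided by the total pixels per frame (including blanking intervals) bounds the refresh rate. The frame totals below are illustrative assumptions, not figures from the text:

```python
def max_refresh_hz(ramdac_mhz, total_h_pixels, total_v_lines):
    # Pixel clock (pixels/s) divided by the total pixels per frame,
    # where the totals include horizontal and vertical blanking.
    return ramdac_mhz * 1_000_000 / (total_h_pixels * total_v_lines)

# A 400 MHz RAMDAC driving a 1600x1200 CRT; the 2160x1250 frame totals
# assume typical blanking overhead and are illustrative only.
print(round(max_refresh_hz(400, 2160, 1250)))  # roughly 148 Hz
```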
Outputs
Video In Video Out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for High-definition
television (HDTV), and DB-15 for Video Graphics Array (VGA)
The most common connection systems between the video card and the computer display are:
Video In Video Out (VIVO) for S-Video, Composite video and Component
video
Included to allow the connection with televisions, DVD players, video recorders and video game consoles.
They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes
with either 4 connectors (S-Video in and out + composite video in and out), or 6 connectors (S-Video in and
out + component PB out + component PR out + component Y out [also composite out] + composite in).
HDMI
An advanced digital audio/video interconnect released in 2003, commonly used to connect game
consoles and DVD players to a display. HDMI supports copy protection through HDCP.
DisplayPort
An advanced license- and royalty-free digital audio/video interconnect released in 2007. DisplayPort
is intended to replace VGA and DVI for connecting a display to a computer.
Motherboard interface
S-100 bus: designed in 1974 as a part of the Altair 8800, it was the first industry-standard bus for the
microcomputer industry.
ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It was an 8- or 16-bit bus clocked at 8 MHz.
NuBus: Used in Macintosh II, it was a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
MCA: Introduced in 1987 by IBM it was a 32-bit bus clocked at 10 MHz.
EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It was
a 32-bit bus clocked at 8.33 MHz.
VLB: An extension of ISA, it was a 32-bit bus clocked at 33 MHz.
PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic
connectivity between devices, avoiding the manual adjustment of jumpers. It is a 32-bit bus clocked at
33 MHz.
UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It had a 64-bit bus
clocked at 67 or 83 MHz.
USB: Although mostly used for miscellaneous devices, such as secondary storage devices and toys, USB
displays and display adapters exist.
AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the
width of the bus to 64 bits and the clock frequency to up to 133 MHz.
PCI Express: Abbreviated PCIe, it is a point-to-point interface released in 2004. In 2006 it provided double
the data-transfer rate of AGP. It should not be confused with PCI-X, an enhanced version of the original
PCI specification.
In the attached table is a comparison between a selection of the features of some of those interfaces.
Bus | Width (bits) | Clock (MHz) | Bandwidth (MB/s) | Style
ISA XT | 8 | 4.77 | n/a | Parallel
ISA AT | 16 | 8.33 | 16 | Parallel
MCA | 32 | 10 | 20 | Parallel
NuBus | 32 | 10 | 10-40 | Parallel
EISA | 32 | 8.33 | 32 | Parallel
VESA | 32 | 40 | 160 | Parallel
PCI | 32-64 | 33-100 | 132-800 | Parallel
AGP 1x | 32 | 66 | 264 | Parallel
AGP 2x | 32 | 66 | 528 | Parallel
AGP 4x | 32 | 66 | 1000 | Parallel
AGP 8x | 32 | 66 | 2000 | Parallel
PCIe x1 | 1 lane | 2500/5000 | 250/500 | Serial
PCIe x4 | 4 lanes | 2500/5000 | 1000/2000 | Serial
PCIe x8 | 8 lanes | 2500/5000 | 2000/4000 | Serial
PCIe x16 | 16 lanes | 2500/5000 | 4000/8000 | Serial
PCIe x16 2.0 | 16 lanes | 5000/10000 | 8000/16000 | Serial
(For the PCIe rows, the paired values give the figures for the first and second versions of the interface.)
Cooling devices
Video cards may use a lot of electricity, which is converted into heat. If the heat isn't dissipated, the video
card could overheat and be damaged. Cooling devices are incorporated to transfer the heat elsewhere. Three
types of cooling devices are commonly used on video cards:
Heat sink: a heat sink is a passive-cooling device. It conducts heat away from the graphics card's core, or
memory, by using a heat-conductive metal (most commonly aluminum or copper); sometimes in
combination with heat pipes. It uses air (most common), or in extreme cooling situations, water
(see water block), to remove the heat from the card. When air is used, a fan is often used to increase
cooling effectiveness.
Computer fan: an example of an active-cooling part. It is usually used with a heat sink. Due to the
moving parts, a fan requires maintenance and possible replacement. The fan speed or actual fan can be
changed for more efficient or quieter cooling.
Water block: a water block is a heat sink designed to use water instead of air. It is mounted on the graphics
processor and is hollow inside. Water is pumped through the water block, transferring the heat into
the water, which is then usually cooled in a radiator. This is the most effective cooling solution without
extreme modification.
Power demand
As the processing power of video cards has increased, so has their demand for electrical power. Current
high-performance video cards tend to consume a great deal of power. While CPU and power supply makers
have recently moved toward higher efficiency, power demands of GPUs have continued to rise, so the video
card may be the biggest electricity user in a computer. Although power supplies are increasing their output
too, the bottleneck is the PCI Express connection, which is limited to supplying 75 watts. Modern
video cards with a power consumption over 75 watts usually include a combination of six-pin (75 W) or
eight-pin (150 W) sockets that connect directly to the power supply.
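The power budget described above (75 W from the slot, plus 75 W or 150 W per supplemental connector) can be totted up in a small sketch:

```python
SLOT_W = 75        # a PCI Express x16 slot can supply up to 75 W
SIX_PIN_W = 75     # supplemental 6-pin PCIe power connector
EIGHT_PIN_W = 150  # supplemental 8-pin PCIe power connector

def available_watts(six_pin=0, eight_pin=0):
    # Total power available to the card: the slot plus any direct connectors.
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(available_watts())                       # 75  (slot only)
print(available_watts(six_pin=1))              # 150 (slot + one 6-pin)
print(available_watts(six_pin=1, eight_pin=1)) # 300 (slot + 6-pin + 8-pin)
```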
A hard disk drive (HDD; also hard drive or hard disk) is a non-volatile, random access
digital magnetic data storage device. It features rotating rigid platters on a motor-driven spindle within a
protective enclosure. Data is magnetically read from and written to the platter by read/write heads that float
on a film of air above the platters. Introduced by IBM in 1956, hard disk drives have decreased in cost and
physical size over the years while dramatically increasing in capacity.
Hard disk drives have been the dominant device for secondary storage of data in general purpose computers
since the early 1960s. They have maintained this position because advances in their recording density have
kept pace with the requirements for secondary storage. Today's HDDs operate on high-speed serial
interfaces; i.e., serial ATA (SATA) or serial attached SCSI (SAS).
History
Hard disk drives were introduced in 1956 as data storage for an IBM real time transaction processing
computer and were developed for use with general purpose mainframe and mini computers.
As the 1980s began, hard disk drives were a rare and very expensive additional feature in personal
computers (PCs); however, by the late 1980s hard disk drives were standard on all but the cheapest PCs.
Most hard disk drives in the early 1980s were sold to PC end users as an add-on subsystem, not under the
drive manufacturer's name but by systems integrators such as the Corvus Disk System or by systems
manufacturers such as the Apple ProFile. The IBM PC/XT in 1983 included an internal standard 10 MB hard
disk drive, and soon thereafter internal hard disk drives proliferated on personal computers.
External hard disk drives remained popular for much longer on the Apple Macintosh. Every Mac made
between 1986 and 1998 has a SCSI port on the back, making external expansion easy; also, "toaster"
Compact Macs did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive
bay at all), so on those models, external SCSI disks were the only reasonable option.
Driven by areal density doubling every two to four years since their invention, HDDs have changed in many
ways. A few highlights include:
Capacity per HDD increasing from 3.75 megabytes to greater than 1 terabyte, a greater than 270-thousand-to-1 improvement.
Size of HDD decreasing from 87.9 cubic feet (a double-wide refrigerator) to 0.002 cubic feet (2.5-inch form factor, a pack of cards), a greater than 44-thousand-to-1 improvement.
Price decreasing from about $15,000 per megabyte to less than $0.0001 per megabyte ($100/1
terabyte), a greater than 150-million-to-1 improvement.
Average access time decreasing from greater than 0.1 second to a few thousandths of a second, a
greater than 40-to-1 improvement.
Market application expanding from general purpose computers to most computing applications
including consumer applications.
Technology
Magnetic recording
HDDs record data by magnetizing ferromagnetic material directionally. Sequential changes in the direction
of magnetization represent patterns of binary data bits. The data are read from the disk by detecting the
transitions in magnetization and decoding the originally written data. Different encoding schemes, such
as modified frequency modulation, group code recording, run-length limited encoding, and others are used.
A typical HDD design consists of a spindle that holds flat circular disks, also called platters, which hold the
recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or
ceramic, and are coated with a shallow layer of magnetic material typically 10–20 nm in depth, with an outer
layer of carbon for protection. For reference, a standard piece of copy paper is 0.07–0.18 millimetre
(70,000–180,000 nm) thick.
The platters in contemporary HDDs are spun at speeds varying from 4200 rpm in energy-efficient portable
devices, to 15,000 rpm for high-performance servers. The first hard drives spun at 1200 rpm, and for many
years 3600 rpm was the norm. Information is written to, and read from, a platter as it rotates past devices
called read-and-write heads that float very close (tens of nanometers in new drives) above the magnetic
surface. The read-and-write head is used to detect and modify the magnetization of the material immediately
under it. In modern drives there is one head for each magnetic platter surface on the spindle, mounted on a
common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the
platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm
is moved using a voice coil actuator or in some older designs a stepper motor.
The magnetic surface of each platter is conceptually divided into many small sub-micrometer-sized
magnetic regions referred to as magnetic domains. In older disk designs the regions were oriented
horizontally and parallel to the disk surface, but beginning about 2005, the orientation was changed
to perpendicular to allow for closer magnetic domain spacing. Due to the polycrystalline nature of the
magnetic material each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic
grains are typically 10 nm in size and each form a single magnetic domain. Each magnetic region in total
forms a magnetic dipole which generates a magnetic field.
For reliable storage of data, the recording material needs to resist self-demagnetization, which occurs when
the magnetic domains repel each other. Magnetic domains written too densely together to a weakly
magnetizable material will degrade over time due to physical rotation of one or more domains to cancel out
these forces. The domains rotate sideways to a halfway position that weakens the readability of the domain
and relieves the magnetic stresses. Older hard disks used iron(III) oxide as the magnetic material, but current
disks use a cobalt-based alloy.
A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used
an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic
induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin-film heads. As data
density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the
head changed according to the strength of the magnetism from the platter. Later development made use
of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was
dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in
close proximity, on the head portion of an actuator arm. The read element is typically magnetoresistive
while the write element is typically thin-film inductive.
The heads are kept from contacting the platter surface by the air that is extremely close to the platter; that air
moves at or near the platter speed. The record and playback head are mounted on a block called a slider, and
the surface next to the platter is shaped to keep it just barely out of contact. This forms a type of air bearing.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be
lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers,
separated by a 3-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in
opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects and
allow greater recording densities is perpendicular recording, first shipped in 2005; as of 2007 it was used in
many HDDs.
Components
HDD with disks and motor hub removed exposing copper colored stator coils surrounding a bearing in the center of
the spindle motor. Orange stripe along the side of the arm is thin printed-circuit cable, spindle bearing is in the center
and the actuator is in the upper left.
A typical hard disk drive has two electric motors; a disk motor that spins the disks and an actuator (motor)
that positions the read/write head assembly across the spinning disks.
The disk motor has an external rotor attached to the disks; the stator windings are fixed in place.
Opposite the actuator at the end of the head support arm is the read-write head (near center in photo); thin
printed-circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the
actuator. A flexible, somewhat U-shaped, ribbon cable, seen edge-on below and to the left of the actuator
arm continues the connection to the controller board on the opposite side.
The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g.
The silver-colored structure at the upper left of the first image is the top plate of the actuator, a permanent-magnet and moving-coil motor that swings the heads to the desired position (it is shown removed in the
second image). The plate supports a squat neodymium-iron-boron (NIB) high-flux magnet. Beneath this
plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is
attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the
motor (some drives only have one magnet).
A disassembled and labeled 1997 hard drive. All major components were placed on a mirror, which created the
symmetrical reflections.
The voice coil itself is shaped rather like an arrowhead, and made of doubly coated copper magnet wire. The
inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a
form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point
to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the
actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other
produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces
that would cancel each other out. Therefore the surface of the magnet is half N pole, half S pole, with the
radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and
produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial
forces that do not rotate the head.
Error handling
Modern drives also make extensive use of Error Correcting Codes (ECCs), particularly Reed–Solomon error
correction. These techniques store extra bits for each block of data that are determined by mathematical
formulas. The extra bits allow many errors to be fixed. While these extra bits take up space on the hard
drive, they allow higher recording densities to be employed, resulting in much larger storage capacity for
user data. In 2009, in the newest drives, low-density parity-check codes (LDPC) were supplanting Reed–Solomon. LDPC codes enable performance close to the Shannon limit and thus allow for the highest
storage density available.
Typical hard drives attempt to "remap" the data in a physical sector that is going bad to a spare physical
sector, hopefully while the errors in that bad sector are still few enough that the ECC can recover the data
without loss. The S.M.A.R.T. system counts the total number of errors in the entire hard drive fixed by ECC,
and the total number of remappings, in an attempt to predict hard drive failure.
Future development
Due to bit-flipping errors and other issues, perpendicular recording densities may be supplanted by other
magnetic recording technologies. Toshiba is promoting bit-patterned recording (BPR), while Xyratex are
developing heat-assisted magnetic recording (HAMR).
October 2011: TDK has developed a special laser that heats up a hard disk's surface with a precision of a
few dozen nanometers. TDK also used the new material in the magnetic head and redesigned its structure to
expand the recording density. This new technology apparently makes it possible to store one terabyte on one
platter; for the initial models, TDK will produce HDDs with two platters.
Capacity
The capacity of an HDD may appear to the end user to be a different amount than the amount stated by a
drive or system manufacturer due to, amongst other things, different units of measuring capacity, capacity
consumed in formatting the drive for use by an operating system, and/or redundancy.
Units of storage capacity

Advertised capacity by manufacturer   Expected capacity by consumers      Diff.   Reported capacity
(using decimal multiples)             in class action                             Windows               Mac OS X 10.6+
                                      (using binary multiples)                    (binary multiples)    (decimal multiples)
100 MB = 100,000,000 bytes            104,857,600 bytes                   4.86%   95.4 MB               100.0 MB
100 GB = 100,000,000,000 bytes        107,374,182,400 bytes               7.37%   93.1 GB (95,367 MB)   100.00 GB
1 TB   = 1,000,000,000,000 bytes      1,099,511,627,776 bytes             9.95%   931 GB (953,674 MB)   1000.00 GB (1,000,000 MB)
And so forth.
The practice of using prefixes assigned to powers of 1000 within the hard drive and computer industries
dates back to the early days of computing. By the 1970s million, mega and M were consistently being used
in the powers of 1000 sense to describe HDD capacity. As HDD sizes grew the industry adopted the prefixes
G for giga and T for tera denoting 1,000,000,000 and 1,000,000,000,000 bytes of HDD capacity
respectively.
Likewise, the practice of using prefixes assigned to powers of 1024 within the computer industry also traces
its roots to the early days of computing. By the early 1970s, using the prefix K in a powers-of-1024 sense to
describe memory was common within the industry. As memory sizes grew the industry adopted the prefixes
describe memory was common within the industry. As memory sizes grew the industry adopted the prefixes
M for mega and G for giga denoting 1,048,576 and 1,073,741,824 bytes of memory respectively.
Computers do not internally represent HDD or memory capacity in powers of 1024; reporting it in this
manner is just a convention. Creating confusion, operating systems report HDD capacity in different ways.
Most operating systems, including the Microsoft Windows operating systems use the powers of
1024 convention when reporting HDD capacity, thus an HDD offered by its manufacturer as a 1 TB drive is
reported by these OSes as a 931 GB HDD. Apple's current OS, beginning with Mac OS X 10.6 (Snow
Leopard), uses powers of 1000 when reporting HDD capacity, thereby avoiding any discrepancy between
what it reports and what the manufacturer advertises.
In the case of mega-, there is a nearly 5% difference between the powers of 1000 definition and
the powers of 1024 definition. Furthermore, the difference is compounded by 2.4% with each incrementally
larger prefix (gigabyte, terabyte, etc.). The discrepancy between the two conventions for measuring capacity
was the subject of several class action suits against HDD manufacturers. The plaintiffs argued that the use of
decimal measurements effectively misled consumers while the defendants denied any wrongdoing or
liability, asserting that their marketing and advertising complied in all respects with the law and that no
Class Member sustained any damages or injuries.
In December 1998, an international standards organization attempted to address these dual definitions of the
conventional prefixes by proposing unique binary prefixes and prefix symbols to denote multiples of 1024,
such as the mebibyte (MiB), which exclusively denotes 2^20 or 1,048,576 bytes. In the more than 12 years
that have since elapsed, the proposal has seen little adoption by the computer industry, and the
conventionally prefixed forms of byte continue to denote slightly different values depending on context.
HDD formatting
The presentation of an HDD to its host is determined by its controller. This may differ substantially from the
drive's native interface particularly in mainframes or servers.
Modern HDDs, such as SAS and SATA drives, appear at their interfaces as a contiguous set of logical
blocks; typically 512 bytes long but the industry is in the process of changing to 4,096 byte logical blocks;
see Advanced Format.
The process of initializing these logical blocks on the physical disk platters is called low-level
formatting, which is usually performed at the factory and is not normally changed in the field.
High level formatting then writes the file system structures into selected logical blocks to make the
remaining logical blocks available to the host OS and its applications. The operating system file system uses
some of the disk space to organize files on the disk, recording their file names and the sequence of disk areas
that represent the file. Examples of data structures stored on disk to retrieve files include the MS-DOS file
allocation table (FAT) and UNIX inodes, as well as other operating system data structures. As a
consequence not all the space on a hard drive is available for user files. This file system overhead is usually
less than 1% on drives larger than 100 MB.
Redundancy
In modern HDDs spare capacity for defect management is not included in the published capacity; however
in many early HDDs a certain number of sectors were reserved for spares, thereby reducing capacity
available to end users.
In some systems, there may be hidden partitions used for system recovery that reduce the capacity available
to the end user.
For RAID arrays, data integrity and fault-tolerance requirements also reduce the realized capacity. For
example, a RAID 1 array will have about half the total capacity as a result of data mirroring, while a
RAID 5 array with x drives loses 1/x of its capacity to parity. RAID arrays are multiple drives that appear
to be one drive to the user, but provide some fault-tolerance. Most RAID vendors use some form
of checksums to improve data integrity at the block level. For many vendors, this involves using HDDs with
sectors of 520 bytes per sector to contain 512 bytes of user data and 8 checksum bytes or using separate 512
byte sectors for the checksum data.
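The capacity arithmetic described above can be sketched in a few lines. This is an illustrative calculation only, not a real RAID implementation, and the function name is ours:

```python
# Sketch of RAID usable-capacity arithmetic: mirroring halves usable
# space; RAID 5 gives up one drive's worth of space to parity.

def usable_capacity(drive_gb, n_drives, level):
    total = drive_gb * n_drives
    if level == 1:                 # mirroring: half the raw capacity
        return total / 2
    if level == 5:                 # parity: lose 1/n of the raw capacity
        return total * (n_drives - 1) / n_drives
    return total                   # RAID 0 / JBOD: no redundancy overhead

print(usable_capacity(1000, 2, 1))   # two 1000 GB drives mirrored -> 1000.0
print(usable_capacity(1000, 4, 5))   # four 1000 GB drives in RAID 5 -> 3000.0
```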
Compiled by IT DIVISION, NAITA
PC hard disk drive capacity (in GB) over time. The vertical axis is logarithmic, so the fit line corresponds
to exponential growth.
Because modern disk drives appear to their interface as a contiguous set of logical blocks their gross
capacity can be calculated by multiplying the number of blocks by the size of the block. This information is
available from the manufacturer's specification and from the drive itself through use of special utilities
invoking low level commands.
The gross capacity of older HDDs can be calculated by multiplying for each zone of the drive the number
of cylinders by the number of heads by the number of sectors/zone by the number of bytes/sector (most
commonly 512) and then summing the totals for all zones. Some modern ATA drives will also
report cylinder, head, sector (C/H/S) values to the CPU, but they are no longer actual physical parameters
since the reported numbers are constrained by historic operating-system interfaces.
The old C/H/S scheme has been replaced by logical block addressing. In some cases, to try to "force-fit" the
C/H/S scheme to large-capacity drives, the number of heads was given as 64, although no modern drive has
anywhere near 32 platters.
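Both capacity calculations can be sketched in Python. The C/H/S geometry below (306 cylinders, 4 heads, 17 sectors per track) is the classic layout of early 10 MB-class drives, and the LBA count is one typical of a "1 TB" drive; both are illustrative figures, not taken from this text:

```python
# Sketch of the two gross-capacity calculations described above.

BYTES_PER_SECTOR = 512

def capacity_lba(total_blocks, block_size=BYTES_PER_SECTOR):
    # Modern drives: a contiguous set of logical blocks,
    # so capacity = number of blocks * block size.
    return total_blocks * block_size

def capacity_chs(cylinders, heads, sectors_per_track,
                 bytes_per_sector=BYTES_PER_SECTOR):
    # Older single-zone drives: cylinders * heads * sectors * bytes.
    return cylinders * heads * sectors_per_track * bytes_per_sector

print(capacity_chs(306, 4, 17))      # -> 10653696 bytes (about 10 MB)
print(capacity_lba(1_953_525_168))   # -> 1000204886016 bytes (about 1 TB)
```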
Form factors
Six hard drives, with 8-inch, 5.25-inch, 3.5-inch, 2.5-inch, 1.8-inch, and 1-inch disks, together with a ruler to show the size of platters and read/write heads.
Mainframe and minicomputer hard disks were of widely varying dimensions, typically in free standing
cabinets the size of washing machines (e.g. HP 7935 and DEC RP06 Disk Drives) or designed so that
dimensions enabled placement in a 19" rack (e.g. Diablo Model 31). In 1962, IBM introduced its model
1311 disk, which used 14 inch (nominal size) platters. This became a standard size for mainframe and
minicomputer drives for many years,[48] but such large platters were never used with microprocessor-based
systems.
With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit
the FDD mountings became desirable, and this led to the evolution of the market towards drives with
certain form factors, initially derived from the sizes of 8-inch, 5.25-inch, and 3.5-inch floppy disk drives.
Smaller sizes than 3.5 inches have emerged as popular in the marketplace and/or been decided by various
industry groups.
3.5 inch: This form factor was the same size as the "half height" 3.5-inch FDD, i.e., 1.63 inches high.
Today, the 1-inch high ("slim line" or "low-profile") version of this form factor is the most popular form
used in most desktops.
2.5 inch: 2.75 in × 0.275–0.59 in × 3.945 in (69.85 mm × 7–15 mm × 100 mm) = 48.895–104.775 cm³
This smaller form factor was introduced by PrairieTek in 1988; there is no corresponding FDD. It is
widely used today for solid-state drives and for hard disk drives in mobile devices (laptops, music
players, etc.) and as of 2008 replacing 3.5 inch enterprise-class drives. It is also used in the PlayStation
3 and Xbox 360 video game consoles. Today, the dominant height of this form factor is 9.5 mm for
laptop drives (usually having two platters inside), but higher capacity drives have a height of 12.5 mm
(usually having three platters). Enterprise-class drives can have a height up to 15 mm. Seagate released a
7mm drive aimed at entry level laptops and high end netbooks in December 2009.
1.8 inch: 54 mm × 8 mm × 71 mm = 30.672 cm³
This form factor, originally introduced by Integral Peripherals in 1993, has evolved into the ATA-7 LIF
with dimensions as stated. It was increasingly used in digital audio players and subnotebooks, but is
rarely used today. An original variant exists for 2–5 GB sized HDDs that fit directly into a PC
card expansion slot. These became popular for their use in iPods and other HDD-based MP3 players.
1 inch: 42.8 mm × 5 mm × 36.4 mm
This form factor was introduced in 1999 as IBM's Microdrive to fit inside a CF Type II slot. Samsung
calls the same form factor "1.3 inch" drive in its product literature.
0.85 inch: 24 mm × 5 mm × 32 mm
Toshiba announced this form factor in January 2004 for use in mobile phones and similar applications,
including SD/MMC slot compatible HDDs optimized for video storage on 4G handsets. Toshiba
currently sells a 4 GB (MK4001MTD) and 8 GB (MK8003MTD) version and holds the Guinness World
Record for the smallest hard disk drive.
Form factor             Width (mm)   Height (mm)   Largest capacity   Platters (max)
5.25" FH                146          -             47 GB (1998)       14
5.25" HH                146          -             19.3 GB (1998)     -
3.5"                    102          19 or 25.4    4 TB (2011)        -
2.5"                    69.9         7–15          -                  -
1.8"                    54           5 or 8        320 GB (2009)      -
1.3"                    43           -             40 GB (2007)       -
1" (CFII/ZIF/IDE-Flex)  42           5             20 GB (2006)       -
0.85"                   24           5             8 GB (2004)        -
Performance characteristics
Access time
The factors that limit the time to access the data on a hard disk drive (Access time) are mostly related to the
mechanical nature of the rotating disks and moving heads. Seek time is a measure of how long it takes the
head assembly to travel to the track of the disk that contains data. Rotational latency is incurred because the
desired disk sector may not be directly under the head when data transfer is requested. These two delays are
on the order of milliseconds each. The bit rate or data transfer rate once the head is in the right position
creates delay which is a function of the number of blocks transferred; typically relatively small, but can be
quite long with the transfer of large contiguous files. Delay may also occur if the drive disks are stopped to
save energy, see Power management.
An HDD's average access time is its average seek time, which technically is the time to do all possible
seeks divided by the number of all possible seeks, but in practice is determined by statistical methods or
simply approximated as the time of a seek over one-third of the number of tracks.
Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to
physically proximate areas on the disk. Some computer operating systems perform defragmentation
automatically. Although automatic defragmentation is intended to reduce access delays, the procedure can
slow response when performed while the computer is in use.
Access time can be improved by increasing rotational speed, thus reducing latency and/or by decreasing
seek time. Increasing areal density increases throughput by increasing data rate and by increasing the
amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data.
Based on historic trends, analysts predict a future growth in HDD areal density (and therefore capacity) of
about 40% per year. Access times have not kept up with throughput increases, which themselves have not
kept up with growth in storage capacity.
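The delays listed above combine into a simple back-of-envelope model: access time = average seek + average rotational latency + transfer time. The numbers below are typical desktop-class values from this section, not a benchmark of any particular drive:

```python
# Back-of-envelope model of HDD access time for a single request.

def access_time_ms(seek_ms, rpm, bytes_to_read, transfer_mb_s):
    latency_ms = 0.5 * 60_000 / rpm          # half a revolution on average
    transfer_ms = bytes_to_read / (transfer_mb_s * 1e6) * 1000
    return seek_ms + latency_ms + transfer_ms

# 9 ms average seek, 7200 rpm, 4 KiB read at 100 MB/s sustained:
print(round(access_time_ms(9.0, 7200, 4096, 100), 2))   # -> 13.21
```

Note that the mechanical terms (seek and latency) dominate: the 4 KiB transfer itself contributes only about 0.04 ms.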
Interleave
Low-level formatting software used to find the highest-performance interleave choice for a 10 MB IBM PC/XT hard disk drive.
Sector interleave is a mostly obsolete device characteristic related to access time, dating back to when
computers were too slow to be able to read large continuous streams of data. Interleaving introduced gaps
between data sectors to allow time for slow equipment to get ready to read the next block of data. Without
interleaving, the next logical sector would arrive at the read/write head before the equipment was ready,
requiring the system to wait for another complete disk revolution before reading could be performed.
However, because interleaving introduces intentional physical delays into the drive mechanism, setting the
interleave to a ratio higher than required causes unnecessary delays for equipment that has the performance
needed to read sectors more quickly. The interleaving ratio was therefore usually chosen by the end-user to
suit their particular computer system's performance capabilities when the drive was first installed in their
system.
Modern technology is capable of reading data as fast as it can be obtained from the spinning platters, so hard
drives usually have a fixed sector interleave ratio of 1:1, which is effectively no interleaving being used.
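The effect of an interleave factor on the physical ordering of logical sectors can be sketched as follows; the helper function is hypothetical and exists purely to illustrate the spacing:

```python
# Sketch of sector interleaving: with an N:1 interleave, logically
# consecutive sectors are spaced N physical positions apart around the
# track, giving slow equipment time to get ready between sectors.

def interleave_layout(sectors, factor):
    """Return the physical order of logical sector numbers on one track."""
    layout = [None] * sectors
    pos = 0
    for logical in range(sectors):
        while layout[pos % sectors] is not None:   # skip filled slots
            pos += 1
        layout[pos % sectors] = logical
        pos += factor
    return layout

print(interleave_layout(8, 1))   # 1:1 -> [0, 1, 2, 3, 4, 5, 6, 7]
print(interleave_layout(8, 3))   # 3:1 -> [0, 3, 6, 1, 4, 7, 2, 5]
```

With the 3:1 layout, reading the whole track takes three revolutions instead of one, which is exactly the cost the 1:1 interleave of modern drives avoids.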
Seek time
Average seek time ranges from 3 ms for high-end server drives, to 15 ms for mobile drives, with the most
common mobile drives at about 12 ms and the most common desktop drives typically being around 9 ms.
The first HDD had an average seek time of about 600 ms, and by the middle 1970s HDDs were available
with seek times of about 25 ms. Some early PC drives used a stepper motor to move the heads, and as a
result had seek times as slow as 80–120 ms, but this was quickly improved by voice-coil actuation in the
1980s, reducing seek times to around 20 ms. Seek time has continued to improve slowly over time.
Some desktop and laptop computer systems allow the user to make a tradeoff between seek performance and
drive noise. Faster seek rates typically require more energy usage to quickly move the heads across the
platter, causing loud noises from the pivot bearing and greater device vibrations as the heads are rapidly
accelerated during the start of the seek motion and decelerated at the end of the seek motion. Quiet operation
reduces movement speed and acceleration rates, but at a cost of reduced seek performance.
Rotational latency
Latency is the delay for the rotation of the disk to bring the required disk sector under the read-write
mechanism. It depends on the rotational speed of the disk, measured in revolutions per minute (rpm).
Average rotational latency is shown in the table below, based on the empirical relation that the average
latency in milliseconds for such a drive is one-half the rotational period.

Rotational speed [rpm]   Average latency [ms]
15,000                   2
10,000                   3
7,200                    4.16
5,400                    5.55
4,800                    6.25
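The latency column is just half the rotational period, which a few lines of Python can confirm (the table truncates the last digit; the computed values round to 4.17 and 5.56):

```python
# Average rotational latency is one-half the rotational period:
# latency_ms = 0.5 * (60,000 ms per minute) / rpm.

def avg_rotational_latency_ms(rpm):
    return 0.5 * 60_000 / rpm

for rpm in (15000, 10000, 7200, 5400, 4800):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
```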
Data transfer rate
As of 2010, a typical 7200 rpm desktop hard drive has a sustained "disk-to-buffer" data transfer rate up to
1030 Mbit/s. This rate depends on the track location, so it will be higher for data on the outer tracks (where
there are more data sectors) and lower toward the inner tracks (where there are fewer data sectors); and is
generally somewhat higher for 10,000 rpm drives. A current widely used standard for the "buffer-to-computer"
interface is 3.0 Gbit/s SATA, which can send about 300 megabyte/s (10-bit encoding) from the
buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates. Data
transfer rate (read/write) can be measured by writing a large file to disk using special file generator tools,
then reading back the file. Transfer rate can be influenced by file system
fragmentation and the layout of the files.
HDD data transfer rate depends upon the rotational speed of the platters and the data recording density.
Because heat and vibration limit rotational speed, advancing density becomes the main method to improve
sequential transfer rates. While areal density advances by increasing both the number of tracks across the
disk and the number of sectors per track, only the latter will increase the data transfer rate for a given rpm.
Since data transfer rate performance only tracks one of the two components of areal density, its performance
improves at a lower rate.
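The dependence on sectors per track (rather than track count) can be sketched directly: the sustained rate is the number of bytes passing under the head per second. The figures below are illustrative, not those of any specific drive:

```python
# Sustained disk-to-buffer rate = sectors/track * bytes/sector * revs/second.
# Adding tracks raises capacity but not this rate; adding sectors per
# track raises both, which is why density gains only partly help throughput.

def sustained_rate_mb_s(rpm, sectors_per_track, bytes_per_sector=512):
    revs_per_sec = rpm / 60
    return sectors_per_track * bytes_per_sector * revs_per_sec / 1e6

# Same rpm, doubling sectors per track doubles the data rate:
print(sustained_rate_mb_s(7200, 1000))   # -> 61.44
print(sustained_rate_mb_s(7200, 2000))   # -> 122.88
```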
Power consumption
Power consumption has become increasingly important, not only in mobile devices such as laptops but also
in server and desktop markets. Increasing data center machine density has led to problems delivering
sufficient power to devices (especially for spin up), and getting rid of the waste heat subsequently produced,
as well as environmental and electrical cost concerns (see green computing). Heat dissipation is tied directly
to power consumption, and as drives age, disk failure rates increase at higher drive temperatures. Similar
issues exist for large companies with thousands of desktop PCs. Smaller form factor drives often use less
power than larger drives. One interesting development in this area is actively controlling the seek speed so
that the head arrives at its destination only just in time to read the sector, rather than arriving as quickly as
possible and then having to wait for the sector to come around (i.e. the rotational latency). Many of the hard
drive companies are now producing Green Drives that require much less power and cooling. Many of these
Green Drives spin slower (<5400 rpm compared to 7200, 10,000 or 15,000 rpm) thereby generating less
heat. Power consumption can also be reduced by parking the drive heads when the disk is not in use,
reducing friction; by adjusting spin speeds; and by disabling internal components when not in use.
Also in systems where there might be multiple hard disk drives, there are various ways of controlling when
the hard drives spin up since the highest current is drawn at that time.
On SCSI hard disk drives, the SCSI controller can directly control spin up and spin down of the drives.
On Parallel ATA (PATA) and Serial ATA (SATA) hard disk drives, some support power-up in
standby or PUIS. The hard disk drive will not spin up until the controller or system BIOS issues a
specific command to do so. This limits the power draw or consumption upon power on.
Some SATA II hard disk drives support staggered spin-up, allowing the computer to spin up the drives
in sequence to reduce load on the power supply when booting.
Power management
Most hard disk drives today support some form of power management which uses a number of specific
power modes that save energy by reducing performance. When implemented, an HDD will change between a
full-power mode and one or more power-saving modes as a function of drive usage. Recovery from the
deepest mode, typically called Sleep, may take as long as several seconds.
Audible noise
Measured in dBA, audible noise is significant for certain applications, such as DVRs, digital audio recording
and quiet computers. Low noise disks typically use fluid bearings, slower rotational speeds (usually
5400 rpm) and reduce the seek speed under load (AAM) to reduce audible clicks and crunching sounds.
Drives in smaller form factors (e.g. 2.5 inch) are often quieter than larger drives.
Shock resistance
Shock resistance is especially important for mobile devices. Some laptops now include active hard drive
protection that parks the disk heads if the machine is dropped, hopefully before impact, to offer the greatest
possible chance of survival in such an event. Maximum shock tolerance to date is 350 g for operating and
1000 g for non-operating.
Historical bit serial interfaces connect a hard disk drive (HDD) to a hard disk controller (HDC) with two
cables, one for control and one for data. (Each drive also has an additional cable for power, usually
connecting it directly to the power supply unit). The HDC provided significant functions such as
serial/parallel conversion, data separation, and track formatting, and required matching to the drive (after
formatting) in order to assure reliability. Each control cable could serve two or more drives, while a
dedicated (and smaller) data cable served each drive.
ST506 used MFM (Modified Frequency Modulation) for the data encoding method.
ST412 was available in either MFM or RLL (Run Length Limited) encoding variants.
Enhanced Small Disk Interface (ESDI) was an industry standard interface similar to ST412 supporting
higher data rates between the processor and the disk drive.
Modern bit serial interfaces connect a hard disk drive to a host bus interface adapter (today typically
integrated into the "south bridge") with one data/control cable. (As for historical bit serial interfaces above,
each drive also has an additional power cable, usually direct to the power supply unit.)
Fibre Channel (FC) is a successor to the parallel SCSI interface in the enterprise market. It is a serial protocol.
In disk drives usually the Fibre Channel Arbitrated Loop (FC-AL) connection topology is used. FC has
much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs).
Recently other protocols for this field, like iSCSI and ATA over Ethernet have been developed as well.
Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fibre optics. The latter
are traditionally reserved for larger devices, such as servers or disk array controllers.
Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the
device, and one pair for differential receiving from the device, just like EIA-422. That requires that data
be transmitted serially. A similar differential signaling system is used in RS-485, LocalTalk, USB,
FireWire, and differential SCSI.
Serial Attached SCSI (SAS). SAS is a new-generation serial communication protocol for devices
designed to allow much higher speed data transfers, and it is compatible with SATA. SAS uses a
mechanically identical data and power connector to standard 3.5-inch SATA1/SATA2 HDDs, and many
server-oriented SAS RAID controllers are also capable of addressing SATA hard drives. SAS uses serial
communication instead of the parallel method found in traditional SCSI devices but still uses SCSI
commands.
Inner view of a 1998 Seagate hard disk drive that used the Parallel ATA interface
Word serial interfaces connect a hard disk drive to a host bus adapter (today typically integrated into the
"south bridge") with one cable for combined data/control. (As for all bit serial interfaces above, each drive
also has an additional power cable, usually direct to the power supply unit.) The earliest versions of these
interfaces typically had an 8-bit parallel data transfer to/from the drive, but 16-bit versions became much
more common, and there are 32-bit versions. Modern variants have serial data transfer. The word nature of
data transfer makes the design of a host bus adapter significantly simpler than that of the precursor HDD
controller.
Integrated Drive Electronics (IDE), later renamed to ATA, with the alias P-ATA or PATA ("parallel
ATA") retroactively added upon introduction of the new variant Serial ATA. The original name reflected
the innovative integration of HDD controller with HDD itself, which was not found in earlier disks.
Moving the HDD controller from the interface card to the disk drive helped to standardize interfaces,
and to reduce the cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a
time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements
for data transfer to and from the hard drive led to an "ultra DMA" mode, known as UDMA.
Progressively faster versions of this standard ultimately added the requirement for an 80-conductor
variant of the same cable, where half of the conductors provide the grounding necessary for enhanced
high-speed signal quality by reducing crosstalk. The 80-conductor interface has only 39 pins; the missing
pin acts as a key to prevent incorrect insertion of the connector into an incompatible socket, a common
cause of disk and controller damage.
EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key
improvement being the use of direct memory access (DMA) to transfer data between the disk and the
computer without the involvement of the CPU, an improvement later adopted by the official ATA
standards. By directly transferring data between memory and disk, DMA eliminates the need for the
CPU to copy the data byte by byte, allowing it to process other tasks while the data transfer occurs.
Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System
Interface, was an early competitor of ESDI. SCSI disks were standard on servers,
workstations, Commodore Amiga, and Apple Macintosh computers through the mid-1990s, by which
time most models had been transitioned to IDE (and later, SATA) family disks. Only in 2005 did the
capacity of SCSI disks fall behind IDE disk technology, though the highest-performance disks are still
available only in SCSI, SAS, and Fibre Channel. The range limitations of the data cable allow for
external SCSI devices. Originally SCSI data cables used single ended (common mode) data
transmission, but server class SCSI could use differential transmission, either low voltage
differential (LVD) or high voltage differential (HVD). ("Low" and "High" voltages for differential SCSI
are relative to SCSI standards and do not match the meaning of low voltage and high voltage as used in
general electrical engineering contexts, as apply e.g. to statutory electrical codes; both LVD and HVD
use low voltage signals (3.3 V and 5 V respectively) in general terminology.)
Acronyms and abbreviations used for HDD interfaces:
SASI: Shugart Associates System Interface
SCSI: Small Computer System Interface
SAS: Serial Attached SCSI
ST-506 and ST-412: Seagate Technology interface model numbers
ESDI: Enhanced Small Disk Interface
ATA (PATA): Advanced Technology Attachment (Parallel ATA)
SATA: Serial ATA
Integrity
Due to the extremely close spacing between the heads and the disk surface, hard disk drives are vulnerable
to being damaged by a head crash: a failure of the disk in which the head scrapes across the platter surface,
often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic
failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and
tear, corrosion, or poorly manufactured platters and heads.
The HDD's spindle system relies on air pressure inside the disk enclosure to support the heads at their
proper flying height while the disk rotates. Hard disk drives require a certain range of air pressures in order
to operate properly. The connection to the external environment, and pressure equalization, occurs through a small hole in
the enclosure (about 0.5 mm in diameter), usually with a filter on the inside (the breather filter). If the air
pressure is too low, then there is not enough lift for the flying head, so the head gets too close to the disk,
and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are
needed for reliable high-altitude operation, above about 3,000 m (10,000 feet). Modern disks include
temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on
all disk drives; they usually have a sticker next to them, warning the user not to cover the holes. The air
inside the operating drive is constantly moving too, being swept in motion by friction with the spinning
platters. This air passes through an internal recirculation (or "recirc") filter to remove any leftover
contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure,
and any particles or outgassing generated internally in normal operation. Very high humidity for extended
periods can corrode the heads and platters.
For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does
not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction
with the disk surface, and can render the data unreadable for a short period until the head temperature
stabilizes (so-called "thermal asperity", a problem which can partially be dealt with by proper electronic
filtering of the read signal).
Head stack with an actuator coil on the left and read/write heads on the right
The hard drive's electronics control the movement of the actuator and the rotation of the disk, and perform
reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by
means of special segments of the disk dedicated to servo feedback. These are either complete concentric
circles (in the case of dedicated servo technology), or segments interspersed with real data (in the case of
embedded servo technology). The servo feedback optimizes the signal to noise ratio of the GMR sensors by
adjusting the voice-coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern disk
firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors
of the media which have failed.
During normal operation, heads in HDDs fly above the data recorded on the disks. Modern HDDs prevent
power interruptions or other malfunctions from landing their heads in the data zone by either physically
moving (parking) the heads to a special landing zone on the platters that is not used for data storage, or by
physically locking the heads in a suspended (unloaded) position raised off the platters. Some early PC
HDDs did not park the heads automatically when power was prematurely disconnected and the heads would
land on data. In some other early units the user manually parked the heads by running a program to park the
HDD's heads.
Landing zones
A landing zone is an area of the platter usually near its inner diameter (ID), where no data are stored. This
area is called the Contact Start/Stop (CSS) zone. Disks are designed such that either a spring or, more
recently, rotational inertia in the platters is used to park the heads in the case of unexpected power loss. In
this case, the spindle motor temporarily acts as a generator, providing power to the actuator.
Spring tension from the head mounting constantly pushes the heads towards the platter. While the disk is
spinning, the heads are supported by an air bearing and experience no physical contact or wear. In CSS
drives the sliders carrying the head sensors (often also just called heads) are designed to survive a number of
landings and takeoffs from the media surface, though wear and tear on these microscopic components
eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the
chance of damage on startup rises above 50%. However, the decay rate is not linear: when a disk is younger
and has had fewer start-stop cycles, it has a better chance of surviving the next startup than an older,
higher-mileage disk (as the head literally drags along the disk's surface until the air bearing is established). For
example, the Seagate Barracuda 7200.10 series of desktop hard disks is rated to 50,000 start-stop cycles; in
other words, no failures attributed to the head-platter interface were seen before at least 50,000 start-stop
cycles during testing.
Around 1995 IBM pioneered a technology where a landing zone on the disk is made by a precision laser
process (Laser Zone Texture, LZT), producing an array of smooth nanometer-scale "bumps" in a landing
zone, thus vastly improving stiction and wear performance. This technology is still largely in use today
(2008), predominantly in desktop and enterprise (3.5 inch) drives. In general, CSS technology can be prone
to increased stiction (the tendency for the heads to stick to the platter surface), e.g. as a consequence of
increased humidity. Excessive stiction can cause physical damage to the platter and slider or spindle motor.
Unloading
Load/Unload technology relies on the heads being lifted off the platters into a safe location, thus
eliminating the risks of wear and stiction altogether. The first HDD, the IBM RAMAC, and most early disk drives
used complex mechanisms to load and unload the heads. Modern HDDs use ramp loading, first introduced
by Memorex in 1967, to load/unload onto plastic "ramps" near the outer disk edge.
All HDDs today still use one of these two technologies listed above. Each has a list of advantages and
drawbacks in terms of loss of storage area on the disk, relative difficulty of mechanical tolerance control,
non-operating shock robustness, cost of implementation, etc.
Addressing shock robustness, IBM also created a technology for their ThinkPad line of laptop computers
called the Active Protection System. When a sudden, sharp movement is detected by the built-in accelerometer in the ThinkPad, the internal hard disk heads automatically unload themselves to reduce the
risk of any potential data loss or scratch defects. Apple later also utilized this technology in
their PowerBook, iBook, MacBook Pro, and MacBook line, known as the Sudden Motion Sensor. Sony, HP
with their HP 3D Drive Guard and Toshiba have released similar technology in their notebook computers.
device. Failure to do so can lead to the loss of data. While it may sometimes be possible to recover lost
information, it is normally an extremely costly procedure, and it is not possible to guarantee success. A 2007
study published by Google suggested very little correlation between failure rates and either high temperature
or activity level; however, the correlation between manufacturer/model and failure rate was relatively
strong. Such statistics are kept highly secret by most entities. Google did not publish the
manufacturer's names along with their respective failure rates, though they have since revealed that they use
Hitachi Deskstar drives in some of their servers. While several S.M.A.R.T. parameters have an impact on
failure probability, a large fraction of failed drives do not produce predictive S.M.A.R.T.
parameters. S.M.A.R.T. parameters alone may not be useful for predicting individual drive failures.
A common misconception is that a colder hard drive will last longer than a hotter hard drive. The Google
study seems to imply the reverse: "lower temperatures are associated with higher failure rates". Hard drives
with S.M.A.R.T.-reported average temperatures below 27 °C (80.6 °F) had higher failure rates than hard
drives with the highest reported average temperature of 50 °C (122 °F), with failure rates at least twice as high as
in the optimum S.M.A.R.T.-reported temperature range of 36 °C (96.8 °F) to 47 °C (116.6 °F).
SCSI, SAS, and FC drives are typically more expensive and are traditionally used in servers and disk arrays,
whereas inexpensive ATA and SATA drives evolved in the home computer market and were perceived to be
less reliable. This distinction is now becoming blurred.
The mean time between failures (MTBF) of SATA drives is usually about 600,000 hours (some drives, such
as the Western Digital Raptor, are rated at 1.4 million hours MTBF), while SCSI drives are rated for upwards of
1.5 million hours. However, independent research indicates that MTBF is not a reliable estimate of a drive's
longevity. MTBF testing is conducted in laboratory environments in test chambers and is an important metric to
determine the quality of a disk drive before it enters high-volume production. Once the drive product is in
production, the more valid metric is annualized failure rate (AFR). AFR is the percentage of real-world drive
failures after shipping.
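The relationship between the two metrics can be sketched. Assuming the constant failure rate under which MTBF figures are quoted, the implied AFR is 1 − exp(−powered hours per year / MTBF); the helper name below is our own.

```python
import math

def afr_from_mtbf(mtbf_hours, powered_hours_per_year=8760):
    """Annualized failure rate implied by an MTBF figure, assuming a
    constant (exponential) failure rate -- the assumption under which
    MTBF numbers are quoted."""
    return 1 - math.exp(-powered_hours_per_year / mtbf_hours)

# MTBF figures quoted above:
print(f"{afr_from_mtbf(600_000):.2%}")    # consumer SATA: ~1.45%
print(f"{afr_from_mtbf(1_400_000):.2%}")  # WD Raptor: ~0.62%
print(f"{afr_from_mtbf(1_500_000):.2%}")  # SCSI: ~0.58%
```

Note that the AFR implied by a 600,000-hour MTBF is roughly double the 0.70%-0.78% observed for enterprise drives, which illustrates why real-world AFR, not laboratory MTBF, is the more valid metric.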
SAS drives are comparable to SCSI drives, with high MTBF and high reliability.
Enterprise SATA drives, designed and produced for enterprise markets, have reliability comparable to other
enterprise-class drives, unlike standard SATA drives.
Typically, enterprise drives (all enterprise drives, including SCSI, SAS, enterprise SATA, and FC)
experience annual failure rates between 0.70% and 0.78% of the total installed drives.
Eventually all mechanical hard disk drives fail, so to mitigate loss of data, some form of redundancy is
needed, such as RAID or a regular backup system.
An external USB 2.0 hard disk drive (front).
External removable hard disk drives offer independence from system integration, establishing
communication via connectivity options, such as USB.
Plug-and-play drive functionality offers system compatibility and large-volume data storage
options while maintaining a portable design.
Because these drives can be removed easily and function independently, their flexibility has led to
further applications. These include:
Disk cloning
Data storage
Data recovery
Booting operating systems (e.g. Linux, Windows, Windows To Go a.k.a. Live USB)
External hard disk drives are available in two main physical sizes, 2.5" and 3.5". Features such as
biometric security or multiple interfaces are available at a higher cost.
Market segments
Mobile HDDs or laptop HDDs, smaller than their desktop and enterprise counterparts, tend to be
slower and have lower capacity. A typical mobile HDD spins at 4200 rpm, 5200 rpm,
5400 rpm, or 7200 rpm, with 5400 rpm being the most prominent. 7200 rpm drives tend to be
more expensive and have smaller capacities, while 4200 rpm models usually have very high
storage capacities. Because of their smaller platters, mobile HDDs generally have lower capacity
than desktop drives.
Sales
Worldwide revenue from shipments of HDDs is expected to reach $27.7 billion in 2010, up 18.4% from
$23.4 billion in 2009, corresponding to a 2010 unit shipment forecast of 674.6 million, compared to
549.5 million units in 2009.
Icons
HDDs are traditionally symbolized as a stylized stack of platters or as a cylinder and are found in diagrams
or on lights to indicate hard drive access.
In most modern operating systems, hard drives are represented by an illustration or photograph of the drive
enclosure, as shown in the examples below.
Manufacturers
More than 200 companies have manufactured hard disk drives over time. As of December 2010,
most hard drives were made by Seagate, Western Digital, Hitachi GST, Samsung, and Toshiba.
A CD/DVD-ROM Drive
History
The first laser disc, demonstrated in 1972, was the LaserVision 12-inch video disc. The video signal was
stored in an analog format, like a video cassette. The first digitally recorded optical disc was a 5-inch audio
compact disc (CD) in a read-only format created by Philips and Sony in 1975. Five years later, the same two
companies introduced a digital storage solution for computers using this same CD size called a CD-ROM.
Not until 1987 did Sony demonstrate the erasable and rewritable 5.25-inch optical drive.
Key components
Laser and optics
The most important part of an optical disc drive is the optical path, placed in a pickup head (PUH), usually
consisting of a semiconductor laser, a lens for guiding the laser beam, and photodiodes detecting the light
reflected from the disc's surface.
Initially, CD lasers with a wavelength of 780 nm were used, within the infrared range. For DVDs, the
wavelength was reduced to 650 nm (red), and the wavelength for Blu-ray Disc was reduced to 405 nm
(violet).
Two main servomechanisms are used: the first maintains the correct distance between lens and disc, to
ensure the laser beam is focused on a small spot on the disc; the second moves the head along the
disc's radius, keeping the beam on the groove, a continuous spiral data path.
On read only media (ROM), during the manufacturing process the groove, made of pits, is pressed on a flat
surface, called land. Because the depth of the pits is approximately one-quarter to one-sixth of the laser's
wavelength, the reflected beam's phase is shifted in relation to the incoming reading beam, causing mutual
destructive interference and reducing the reflected beam's intensity. This is detected by photodiodes that
output electrical signals.
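The quarter-wavelength figure can be checked with a short calculation. Note the assumption: inside the disc the wavelength is shortened by polycarbonate's refractive index, taken here as roughly 1.55 (a typical value, not from the text).

```python
# Rough check of the quarter-wavelength pit depth: a pit lambda/4 deep gives
# a half-wavelength round-trip path difference, hence destructive
# interference between pit and land reflections. The refractive index
# n ~= 1.55 of polycarbonate is an assumed typical value.

def quarter_wave_pit_depth_nm(vacuum_wavelength_nm, n=1.55):
    """Pit depth giving destructive interference, in nanometers."""
    in_medium = vacuum_wavelength_nm / n   # wavelength inside the disc
    return in_medium / 4                   # quarter wave -> lambda/2 round trip

for name, wl in [("CD", 780), ("DVD", 650), ("Blu-ray", 405)]:
    print(f"{name}: laser {wl} nm -> pit depth ~{quarter_wave_pit_depth_nm(wl):.0f} nm")
```

The shrinking pit depth from CD (~126 nm) to Blu-ray (~65 nm) tracks the shrinking laser wavelengths listed above.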
A recorder encodes (or burns) data onto a recordable CD-R, DVD-R, DVD+R, or BD-R disc (called a blank)
by selectively heating parts of an organic dye layer with a laser. This changes the
reflectivity of the dye, thereby creating marks that can be read like the pits and lands on pressed discs. For
recordable discs, the process is permanent and the media can be written to only once. While the reading laser
is usually not stronger than 5 mW, the writing laser is considerably more powerful. The higher the writing
speed, the less time the laser has to heat a point on the media, so its power has to increase proportionally.
DVD burners' lasers often peak at about 200 mW, in either continuous wave or pulses, although some have
been driven up to 400 mW before the diode fails.
For rewritable CD-RW, DVD-RW, DVD+RW, DVD-RAM, or BD-RE media, the laser is used to melt a
crystalline metal alloy in the recording layer of the disc. Depending on the amount of power applied, the
substance may be allowed to melt back (change the phase back) into crystalline form or left in an amorphous
form, enabling marks of varying reflectivity to be created.
Double-sided media may be used, but they are not easily accessed with a standard drive, as they must be
physically turned over to access the data on the other side.
Double layer (DL) media have two independent data layers separated by a semi-reflective layer. Both layers
are accessible from the same side, but require the optics to change the laser's focus. Traditional single layer
(SL) writable media are produced with a spiral groove molded in the protective polycarbonate layer (not in
the data recording layer), to guide and synchronize the speed of the recording head. Double-layered writable
media have: a first polycarbonate layer with a (shallow) groove, a first data layer, a semi-reflective layer, a
second (spacer) polycarbonate layer with another (deep) groove, and a second data layer. The first groove
spiral usually starts on the inner edge and extends outwards, while the second groove starts on the outer edge
and extends inwards.
Some drives support Hewlett-Packard's LightScribe photothermal printing technology for labeling specially
coated discs.
Rotational mechanism
* Some CD-R(W) and DVD-R(W)/DVD+R(W) recorders operate in Z-CLV, CAA or CAV modes.
Optical drives' rotational mechanism differs considerably from hard disk drives', in that the latter keep
a constant angular velocity (CAV), in other words a constant number of revolutions per minute (RPM). With
CAV, a higher throughput is generally achievable at an outer disc area, as compared to inner area.
On the other hand, optical drives were developed with the assumption of achieving a constant throughput, in
CD drives initially equal to 150 KiB/s. This was important for streaming audio data, which tends to
require a constant bit rate. But to ensure no disc capacity was wasted, the head also had to transfer data at the
maximum linear rate at all times, without slowing on the outer rim of the disc. This led to optical
drives, until recently, operating with a constant linear velocity (CLV): the spiral groove of the disc
passes under the head at a constant speed. The implication of CLV, as opposed to CAV, is that
disc angular velocity is no longer constant, and the spindle motor needs to be designed to vary speed between
about 200 RPM on the outer rim and 500 RPM on the inner rim.
Later CD drives kept the CLV paradigm but evolved to achieve higher rotational speeds, popularly
described in multiples of a base speed. As a result, a 4× drive, for instance, would rotate at
800-2000 RPM while transferring data steadily at 600 KB/s, which is equal to 4 × 150 KB/s.
For DVDs the base speed, or "1× speed", is 1.385 MB/s (1.32 MiB/s), approximately nine times faster than
the CD base speed. For Blu-ray drives the base speed is 6.74 MB/s (6.43 MiB/s).
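A small helper, using the base rates quoted above, shows how a speed multiplier translates into a transfer rate (the function and table names are our own):

```python
# Nominal 1x rates, in KB/s (the CD value of 150 is strictly KiB/s; the
# small difference is ignored in this sketch).
BASE_RATE_KBS = {
    "CD": 150,     # the original audio streaming rate
    "DVD": 1385,   # 1.385 MB/s
    "BD": 6740,    # 6.74 MB/s
}

def transfer_rate_kbs(media, multiplier):
    """Sustained transfer rate of an Nx drive for the given media type."""
    return BASE_RATE_KBS[media] * multiplier

print(transfer_rate_kbs("CD", 4))    # 600 KB/s, matching the 4x CD example
print(transfer_rate_kbs("DVD", 16))  # 22160 KB/s, i.e. about 22.2 MB/s
```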
There are mechanical limits to how quickly a disc can be spun. Beyond a certain rate of rotation, around
10,000 RPM, centrifugal stress can cause the disc plastic to creep and possibly shatter. At the outer edge of
a CD, the 10,000 RPM limit roughly equals 52× speed, but at the inner edge only 20×. Some
drives further lower their maximum read speed to around 40× on the reasoning that blank discs will be clear
of structural damage, but that discs inserted for reading may not be. Without higher rotational speeds,
increased read performance may be attainable by simultaneously reading more than one point of the data
groove, but drives with such mechanisms are more expensive, less compatible, and very uncommon.
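These limits follow from the CLV geometry. The sketch below assumes a 1× linear velocity of 1.2 m/s and track radii of 23 mm (inner) and 60 mm (outer edge) — assumed values, not from the text — under which a 10,000 RPM cap reproduces roughly the 52× and 20× figures.

```python
import math

# Back-of-the-envelope check of the "52x at the outer edge, 20x at the
# inner edge" figures. The 1x linear velocity (1.2 m/s) and the radii
# are assumed, typical CD values.

def max_multiplier(rpm_cap, radius_m, base_velocity_ms=1.2):
    """Highest CLV speed multiple readable at this radius without
    exceeding rpm_cap."""
    linear_speed = rpm_cap / 60 * 2 * math.pi * radius_m  # m/s at that radius
    return linear_speed / base_velocity_ms

print(f"outer edge: ~{max_multiplier(10_000, 0.060):.0f}x")  # ~52x
print(f"inner edge: ~{max_multiplier(10_000, 0.023):.0f}x")  # ~20x
```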
Because keeping a constant transfer rate for the whole disc is not so important in most contemporary CD
uses, a pure CLV approach had to be abandoned to keep the rotational speed of the disc safely low while
maximizing the data rate. Some drives work in a partial CLV (PCLV) scheme, switching from CLV to
CAV only when a rotational limit is reached. But switching to CAV requires considerable changes in
hardware design, so instead most drives use the zoned constant linear velocity (Z-CLV) scheme. This
divides the disc into several zones, each having its own constant linear velocity. A Z-CLV recorder
rated at "52×", for example, would write at 20× on the innermost zone and then progressively increase the
speed in several discrete steps up to 52× at the outer rim.
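The consequence of a Z-CLV schedule can be sketched: the rated speed applies only to the fastest zone, so the average write speed is lower. The zone boundaries and speeds below are illustrative values, not a real drive's zone table.

```python
CD_1X_KBS = 150  # 1x CD rate in KB/s

def zclv_write_seconds(disc_kb, zones):
    """Time to write a disc under a Z-CLV schedule.

    zones: list of (fraction_of_disc, speed_multiple) covering the disc."""
    assert abs(sum(f for f, _ in zones) - 1.0) < 1e-9
    return sum(disc_kb * frac / (mult * CD_1X_KBS) for frac, mult in zones)

disc_kb = 700 * 1024  # a 700 MB disc
# Illustrative zone table for a "52x"-rated recorder:
zones = [(0.25, 20), (0.25, 32), (0.25, 40), (0.25, 52)]
t = zclv_write_seconds(disc_kb, zones)
print(f"Z-CLV write time: {t:.0f} s")                          # ~150 s
print(f"effective average speed: {disc_kb / (t * CD_1X_KBS):.1f}x")  # ~31.9x
```

With these made-up zones, a "52×" recorder averages only about 32× over the whole disc, which is why rated and effective write speeds differ.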
Loading mechanisms
Current optical drives use either a tray-loading mechanism, where the disc is loaded onto a motorised or
manually operated tray, or a slot-loading mechanism, where the disc is slid into a slot and drawn in by
motorized rollers. Slot-loading drives have the disadvantage that they cannot usually accept the smaller
80 mm discs or any non-standard sizes; however, the Wii and PlayStation 3 video game consoles have
overcome this problem: they are able to load standard-size DVDs and 80 mm discs in the same slot-loading drive.
A small number of drive models, mostly compact portable units, have a top-loading mechanism where the
drive lid is opened upwards and the disc is placed directly onto the spindle (for example, all PlayStation 1
consoles, portable CD players, and some standalone CD recorders feature top-loading drives).
These sometimes have the advantage of using spring-loaded ball bearings to hold the disc in place,
minimizing damage to the disc if the drive is moved while it is spun up.
Some early CD-ROM drives used a mechanism where CDs had to be inserted into
special cartridges or caddies, somewhat similar in appearance to a 3.5" floppy diskette. This was intended to
protect the disc from accidental damage by enclosing it in a tougher plastic casing, but did not gain wide
acceptance due to the additional cost and compatibility concerns; such drives would also inconveniently
require "bare" discs to be manually inserted into an openable caddy before use.
Computer interfaces
Digital audio output, analog audio output, and parallel ATA interface.
Most internal drives for personal computers, servers and workstations are designed to fit in a standard
5.25" drive bay and connect to their host via an ATA or SATA interface. Additionally, there may be digital
and analog outputs for Red Book audio. The outputs may be connected via a header cable to the sound card
or the motherboard. At one time, computer software resembling CD players controlled playback of the CD.
Today the information is extracted from the disc as data, to be played back or converted to other file
formats.
External drives usually have USB or FireWire interfaces. Some portable versions for laptop use power
themselves off batteries or off their interface bus.
Drives with a SCSI interface were made, but they are less common and tend to be more expensive, because of
the cost of their interface chipsets, more complex SCSI connectors, and small volume of sales.
When the optical disc drive was first developed, it was not easy to add to computer systems. Some
computers such as the IBM PS/2 were standardizing on the 3.5" floppy and 3.5" hard disk, and did not
include a place for a large internal device. Also, IBM PCs and clones at first included only a
single ATA drive interface, which, by the time the CD-ROM was introduced, was already being used to
support two hard drives. Early laptops simply had no built-in high-speed interface for supporting an external
storage device.
This was solved through several techniques:
Early sound cards could include a second ATA interface, though it was often limited to supporting a
single optical drive and no hard drives. This evolved into the modern second ATA interface included as
standard equipment.
A parallel-port external drive was developed that connected between a printer and the computer. This
was slow but an option for laptops.
A PCMCIA optical drive interface was also developed for laptops.
A SCSI card could be installed in desktop PCs for an external SCSI drive enclosure, though SCSI was
typically much more expensive than other options.
Compatibility
Most optical drives are backwards compatible with their ancestors up to CD, although this is not required by
standards.
Compared to a CD's 1.2 mm layer of polycarbonate, a DVD's laser beam only has to penetrate 0.6 mm in
order to reach the recording surface. This allows a DVD drive to focus the beam on a smaller spot size and
to read smaller pits. The DVD lens supports a different focus for CD and DVD media with the same laser.
[Drive and media compatibility table. Media columns: pressed CD, CD-R, CD-RW, pressed DVD, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD±R DL, pressed BD, BD-R, BD-RE, BD-R DL, and BD-RE DL. Drive rows: audio CD player, CD-ROM drive, CD-R recorder, CD-RW recorder, DVD-ROM drive, DVD-R recorder, DVD-RW recorder, DVD+R recorder, DVD+RW recorder, DVD±RW recorder, DVD±RW/DVD+R DL recorder, BD-ROM drive, BD-R recorder, BD-RE recorder, BD-R DL recorder, and BD-RE DL recorder. In general, each drive class reads the pressed and recordable media of earlier generations and of its own generation, and writes the recordable formats its name indicates; CD-class devices cannot access DVD or BD media, and DVD-class devices cannot access BD media.]
Some types of CD-R media with less-reflective dyes may cause problems.
A large-scale compatibility test conducted by cdrinfo.com in July 2003 found DVD-R discs playable by
96.74% of consumer DVD players and DVD-ROM drives, DVD+R by 87.32%, DVD-RW by 87.68%, and
DVD+RW by 86.96%.
Read compatibility with existing DVD drives may vary greatly with the brand of DVD+R DL media
used.
Recorder firmware may blacklist or otherwise refuse to record to some brands of DVD-RW media.
As of April 2005, all DVD+R DL recorders on the market are Super Multi-capable.
As of October 2006, recently released BD drives are able to read and write CD media.
Recording performance
Optical recorder drives are often marked with three different speed ratings. In these cases, the first speed is
for write-once (R) operations, the second for re-write (RW or RE) operations, and the third for read-only (ROM)
operations. For example, a 12×/10×/32× CD drive can write to CD-R discs at 12× speed (1.76
MB/s), write to CD-RW discs at 10× speed (1.46 MB/s), and read from any CD disc at 32× speed (4.69
MB/s).
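The ×-rating arithmetic can be sketched in Python. This is purely illustrative; the 1× base rate of 150 KiB/s is the standard CD-DA streaming rate, and the MB/s figures quoted in the text are binary megabytes.

```python
# Convert CD "x" speed ratings into throughput. 1x is the CD-DA
# streaming rate of 150 KiB/s; the MB/s figures quoted in the text
# are binary megabytes (MiB/s).
CD_BASE_KIB_S = 150

def cd_speed_to_mb_s(x_rating: int) -> float:
    """Throughput in MiB/s for a given x rating."""
    return x_rating * CD_BASE_KIB_S / 1024

# The 12x/10x/32x drive from the example:
for label, x in [("CD-R write", 12), ("CD-RW write", 10), ("CD read", 32)]:
    print(f"{label}: {x}x = {cd_speed_to_mb_s(x):.2f} MB/s")
# CD-R write: 12x = 1.76 MB/s
# CD-RW write: 10x = 1.46 MB/s
# CD read: 32x = 4.69 MB/s
```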
In the late 1990s, buffer underruns became a very common problem as high-speed CD recorders began to
appear in home and office computers, which, for a variety of reasons, often could not muster the I/O
performance to keep the data stream to the recorder steadily fed. Should it run short, the recorder would be
forced to halt the recording process, leaving a truncated track that usually renders the disc useless.
In response, manufacturers of CD recorders began shipping drives with "buffer under run protection" (under
various trade names, such as Sanyo's "BURN-Proof", Ricoh's "Just Link" and Yamaha's "Lossless Link").
These can suspend and resume the recording process in such a way that the gap the stoppage produces can
be dealt with by the error-correcting logic built into CD players and CD-ROM drives. The first of these
drives were rated at 12× and 16×.
Recording schemes
CD recording on personal computers was originally a batch-oriented task: it required
specialized authoring software to create an "image" of the data to record, and to record it to disc in a single
session. This was acceptable for archival purposes, but limited the general convenience of CD-R and CD-RW discs as a removable storage medium.
Packet writing is a scheme in which the recorder writes incrementally to disc in short bursts, or packets.
Sequential packet writing fills the disc with packets from bottom up. To make it readable in CD-ROM and
DVD-ROM drives, the disc can be closed at any time by writing a final table-of-contents to the start of the
disc; thereafter, the disc cannot be packet-written any further. Packet writing, together with support from
the operating system and a file system like UDF, can be used to mimic random write-access as in media like
flash memory and magnetic disks.
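The contrast between incremental packet writing and a closed session can be modeled as a toy sketch. This is purely illustrative; no real burner or operating-system API looks like this.

```python
# Toy model of sequential packet writing: packets append until the
# session is closed by writing a final table of contents, after which
# the disc is readable in ordinary drives but no longer writable.
class PacketWrittenDisc:
    def __init__(self) -> None:
        self.packets: list[bytes] = []
        self.closed = False  # True once the final TOC is written

    def write_packet(self, data: bytes) -> None:
        if self.closed:
            raise IOError("disc closed: no further packet writes")
        self.packets.append(data)

    def close_session(self) -> None:
        # Writing the final TOC makes the disc readable in CD-ROM and
        # DVD-ROM drives, but ends incremental recording for good.
        self.closed = True

disc = PacketWrittenDisc()
disc.write_packet(b"first burst")
disc.write_packet(b"second burst")
disc.close_session()
# disc.write_packet(b"third") would now raise IOError
```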
Fixed-length packet writing (on CD-RW and DVD-RW media) divides up the disc into padded, fixed-size
packets. The padding reduces the capacity of the disc, but allows the recorder to start and stop recording on
an individual packet without affecting its neighbours. These resemble the block-writable access offered by
magnetic media closely enough that many conventional file systems will work as-is. Such discs, however,
are not readable in most CD-ROM and DVD-ROM drives or on most operating systems without additional
third-party drivers.
The DVD+RW disc format goes further by embedding more accurate timing hints in the data groove of the
disc and allowing individual data blocks to be replaced without affecting backwards compatibility (a feature
dubbed "lossless linking"). The format itself was designed to deal with discontinuous recording because it
was expected to be widely used in digital video recorders. Many such DVRs use variable-rate video
compression schemes which require them to record in short bursts; some allow simultaneous playback and
recording by alternating quickly between recording to the tail of the disc whilst reading from elsewhere.
Mount Rainier aims to make packet-written CD-RW and DVD+RW discs as convenient to use as
removable magnetic media by having the firmware format new discs in the background and manage media
defects (by automatically mapping parts of the disc which have been worn out by erase cycles to reserve
space elsewhere on the disc). As of February 2007, Mount Rainier is natively supported
in Windows Vista. All previous versions of Windows require a third-party solution, as does Mac OS X.
Compiled by IT DIVISION, NAITA
Consider two power supplies:
PSU A has a peak rating of 550 watts at 25 °C, with 25 amps (300 W) on the 12 volt line, and
PSU B has a continuous rating of 450 watts at 40 °C, with 33 amps (roughly 400 W) on the 12 volt line.
If those ratings are accurate, then PSU B would have to be considered a vastly superior unit, despite its
lower overall power rating: PSU A may only be capable of delivering a fraction of its rated power under real-world conditions.
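The 12 V rail figures quoted above are simply watts = volts × amps, which is easy to verify:

```python
# Verify the quoted 12 V rail wattages: watts = volts * amps.
def rail_watts(volts: float, amps: float) -> float:
    return volts * amps

print(rail_watts(12, 25))  # 300.0 W, PSU A's 12 V rail
print(rail_watts(12, 33))  # 396.0 W, i.e. roughly PSU B's quoted 400 W
```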
This tendency has led in turn to greatly over-specified power supply recommendations, and a shortage of
high-quality power supplies with reasonable capacities. Simple, general-purpose computers rarely require
more than 300–350 watts maximum.[1] Higher-end computers such as servers and gaming machines with
multiple high-power GPUs are among the few exceptions.
Appearance
Most computer power supplies are a square metal box, and have a large bundle of wires emerging from one
end. Opposite the wire bundle is the back face of the power supply, with an air vent and an IEC 60320 C14
connector to supply AC power. There may optionally be a power switch and/or a voltage selector switch. A
label on one side of the box lists technical information about the power supply, including safety
certifications and maximum output power. Common certification marks for safety are the UL mark, GS mark,
TÜV, NEMKO, SEMKO, DEMKO, FIMKO, CCC, CSA, VDE, GOST R and BSMI. Common certification
marks for EMI/RFI are the CE mark, FCC and C-tick. The CE mark is required for power supplies sold in
Europe and India.
A RoHS or 80 PLUS mark can also sometimes be seen.
Dimensions of an ATX power supply are 150 mm width, 86 mm height, and typically 140 mm depth,
although the depth can vary from brand to brand.
Connectors
Typically, power supplies have the following connectors (all are Molex (USA) Inc Mini-Fit Jr, unless
otherwise indicated):
PC Main power connector (usually called P1): This is the connector that goes to the motherboard to provide
it with power. The connector has 20 or 24 pins. One of the pins belongs to the PS-ON wire (it is usually
green). This connector is the largest of all the connectors. In older AT power supplies, this connector was
split in two: P8 and P9. A power supply with a 24-pin connector can be used on a motherboard with a 20-pin
connector. In cases where the motherboard has a 24-pin connector, some power supplies come with two
connectors (one with 20-pin and other with 4-pin) which can be used together to form the 24-pin connector.
ATX12V 4-pin power connector (also called the P4 power connector). A second connector that goes to the
motherboard (in addition to the main 24-pin connector) to supply dedicated power for the processor. For
high-end motherboards and processors, more power is required, therefore EPS12V has an 8 pin connector.
4-pin Peripheral power connectors: These are the other, smaller connectors that go to the various disk drives
of the computer. Most of them have four wires: two black, one red, and one yellow. Unlike the standard
mains electrical wire color-coding, each black wire is a ground, the red wire is +5 V, and the yellow wire is
+12 V. In some cases these are also used to provide additional power to PCI cards such as FireWire 800
cards.
4-pin Molex (Japan) Ltd power connectors (usually called Mini-connector or "mini-Molex"): This is one of
the smallest connectors that supplies the floppy drive with power. In some cases, it can be used as an
auxiliary connector for AGP video cards. Its cable configuration is similar to the Peripheral connector.
Auxiliary power connectors: There are several types of auxiliary connectors designed to provide additional
power if it is needed.
Serial ATA power connectors: a 15-pin connector for components which use SATA power plugs. This
connector supplies power at three different voltages: +3.3, +5, and +12 volts.
6-pin: Most modern computer power supplies include 6-pin connectors, which are generally used for PCI
Express graphics cards; a newly introduced 8-pin connector appears on the latest power
supplies. Each PCI Express 6-pin connector can output a maximum of 75 W.
6+2 pin: For backwards compatibility, some connectors designed for use with high-end PCI
Express graphics cards feature this pin configuration. It allows either a 6-pin card or an 8-pin card to
be connected by using two separate connection modules wired into the same sheath: one with 6 pins and
another with 2 pins.
An IEC 60320 C14 connector with an appropriate C13 cord is used to attach the power supply to the local
power grid.
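As a convenience summary of the connector list above (pin counts as stated in the text, not an exhaustive list), the common connectors can be tabulated:

```python
# Pin counts for the power-supply connectors described above: a
# convenience summary of the text, not an exhaustive list.
CONNECTOR_PINS = {
    "ATX main (P1)": 24,        # 20 pins on older ATX supplies
    "ATX12V (P4)": 4,           # CPU power; EPS12V uses 8 pins
    "peripheral (Molex)": 4,    # disk drives: 2x ground, +5 V, +12 V
    "floppy (mini-Molex)": 4,
    "SATA power": 15,           # carries +3.3 V, +5 V and +12 V
    "PCI Express": 6,           # 6+2 variants also fit 8-pin cards
}

def pins(connector: str) -> int:
    return CONNECTOR_PINS[connector]

print(pins("SATA power"))  # 15
```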
A PC motherboard is the main circuit board within a typical desktop computer, laptop or server. Its main
functions are as follows:
to serve as a central backbone to which all other modular parts such as CPU, RAM, and hard drives can
be attached as required to create a modern computer;
to accept (on many motherboards) different components (in particular CPU and expansion cards) for the
purposes of customization;
to distribute power to PC components;
to electronically co-ordinate and interface the operation of the components.
As new generations of components have been developed, the standards of motherboards have changed too;
for example, with AGP being introduced, and more recently PCI Express. However, the standardized size
and layout of motherboards have changed much more slowly, and are controlled by their own standards. The
list of components a motherboard must include changes far more slowly than the components themselves.
For example, north bridge controllers have changed many times since their introduction, with many
manufacturers bringing out their own versions, but in terms of form factor standards, the requirement to
allow for a north bridge has remained fairly static for many years.
Although it is a slower process, form factors do evolve regularly in response to changing demands. The
original PC standard (AT) was superseded in 1995 by the current industry standard ATX, which still dictates
the size and design of the motherboard in most modern PCs. The latest update to the ATX standard was
released in 2007. A divergent standard by chipset manufacturer VIA called EPIA (also known as ITX, and
not to be confused with EPIC) is based upon smaller form factors and its own standards.
Differences between form factors are most apparent in terms of their intended market sector, and involve
variations in size, design compromises and typical features. Most modern computers have very similar
requirements, so form factor differences tend to be based upon subsets and supersets of these. For example,
a desktop computer may require more sockets for maximal flexibility and many optional connectors and
other features on-board, whereas a computer to be used in a multimedia system may need to be optimized
for heat and size, with additional plug-in cards being less common. The smallest motherboards may sacrifice
CPU flexibility in favor of a fixed manufacturer's choice.
Comparisons
Tabular information
Form factor | Originated | Max. size | Notes
PC | IBM 1983 | 8.5 × 11 in (216 × 279 mm) |
AT (Advanced Technology) | IBM 1984 | 12 × 11–13 in (305 × 279–330 mm) |
Baby-AT | IBM 1985 | 8.5 × 10–13 in (216 × 254–330 mm) |
ATX | Intel 1996 | 12 × 9.6 in (305 × 244 mm) |
SSI CEB | SSI | 12 × 10.5 in (305 × 267 mm) |
SSI EEB | SSI | 12 × 13 in (305 × 330 mm) |
SSI MEB | SSI | 16.2 × 13 in (411 × 330 mm) |
microATX | 1996 | 9.6 × 9.6 in (244 × 244 mm) | A smaller variant of the ATX form factor
Mini-ATX | AOpen 2005 | 5.9 × 5.9 in (150 × 150 mm) |
FlexATX | Intel 1999 | 9.0 × 7.5 in (228.6 × 190.5 mm) max. |
Mini-ITX | VIA 2001 | 6.7 × 6.7 in (170 × 170 mm) max. |
Nano-ITX | VIA 2003 | 4.7 × 4.7 in (120 × 120 mm) |
Pico-ITX | VIA 2007 | 100 × 72 mm max. |
Mobile-ITX | VIA 2007 | 2.953 × 1.772 in (75 × 45 mm) |
BTX (Balanced Technology Extended) | Intel 2004 | 12.8 × 10.5 in (325 × 267 mm) max. | Proposed by Intel as a successor to ATX in the early 2000s; according to Intel the layout has better cooling. BTX boards are flipped in comparison to ATX boards, with the processor placed closest to the fan. May contain a CNR board.
MicroBTX (uBTX) | Intel 2004 | 10.4 × 10.5 in (264 × 267 mm) max. |
PicoBTX | Intel 2004 | 8.0 × 10.5 in (203 × 267 mm) max. |
DTX | AMD 2007 | 200 × 244 mm max. |
Mini-DTX | AMD 2007 | 200 × 170 mm max. |
smartModule | Digital-Logic | 66 × 85 mm |
ETX | Kontron | 95 × 114 mm |
COM Express Basic | PICMG | 95 × 125 mm |
COM Express Compact | PICMG | 95 × 95 mm |
nanoETXexpress | Kontron | 55 × 84 mm |
Core Express | SFF-SIG | 58 × 65 mm |
Extended ATX (EATX) | Unknown | 12 × 13 in (305 × 330 mm) |
LPX | Unknown | 9 × 11–13 in (229 × 279–330 mm) |
Mini-LPX | Unknown | 8–9 × 10–11 in (203–229 × 254–279 mm) |
PC/104 | PC/104 Consortium 1992 | 3.8 × 3.6 in |
PC/104-Plus | PC/104 Consortium 1997 | 3.8 × 3.6 in |
PCI/104-Express | PC/104 Consortium 2008 | 3.8 × 3.6 in |
PCIe/104 | PC/104 Consortium 2008 | |
NLX | Intel 1999 | 8–9 × 10–13.6 in (203–229 × 254–345 mm) |
UTX | TQ-Components 2001 | 88 × 108 mm |
WTX | Intel 1998 | 14 × 16.75 in (355.6 × 425.4 mm) |
HPTX | EVGA 2008 | 13.6 × 15 in (345.44 × 381 mm) |
XTX | 2005 | 95 × 114 mm |
[Figure: relative sizes of the ATX, microATX, FlexATX, DTX, Mini-DTX and Mini-ITX form factors]
AT vs. ATX
There are two basic differences between AT and ATX power supplies: The connectors that provide power to
the motherboard, and the soft switch. On older AT power supplies, the Power-on switch wire from the front
of the computer is connected directly to the power supply.
On newer ATX power supplies, the power switch on the front of the computer goes to the motherboard over
a connector labeled something like PS ON, Power SW, or SW Power. This allows other hardware and/or
software to turn the system on and off.
The motherboard controls the power supply through pin #14 of the 20 pin connector or #16 of the 24 pin
connector on the motherboard. This pin carries 5V when the power supply is in standby. It can be grounded
to turn the power supply on without having to turn on the rest of the components. This is useful for testing or
to use the computer ATX power supply for other purposes.
AT stands for Advanced Technology, while ATX stands for Advanced Technology eXtended.
Laptops
Most portable computers have power supplies that provide 25 to 200 watts. In portable computers (such
as laptops) there is usually an external power supply (sometimes referred to as a "power brick" due to its
similarity, in size, shape and weight, to a real brick) which converts AC power to one DC voltage (most
commonly 19 V), and further DC-DC conversion occurs within the laptop to supply the various DC voltages
required by the other components of the portable computer.
Servers
Some web servers use a single-voltage 12 volt power supply. All other voltages are generated by voltage
regulator modules on the motherboard.
Energy efficiency
Computer power supplies are generally about 70–75% efficient. For a 75%-efficient
power supply to produce 75 W of DC output, it requires 100 W of AC input and dissipates the
remaining 25 W as heat. Higher-quality power supplies can be over 80% efficient; more efficient
PSUs waste less energy as heat, require less airflow for cooling, and are therefore quieter.
Google's server power supplies are more than 90% efficient. HP's server power supplies have reached 94%
efficiency. Standard PSUs sold for server workstations have around 90% efficiency, as of 2010.
It's important to match the capacity of a power supply to the power needs of the computer. The energy
efficiency of power supplies drops significantly at low loads; efficiency generally peaks at about 50–75%
load. The curve varies from model to model (examples of how this curve looks can be seen in test reports of
energy-efficient models found on the 80 PLUS website). As a rule of thumb for standard power supplies, it is
usually appropriate to buy a supply such that the calculated typical consumption of one's computer is about
60% of the rated capacity of the supply, provided that the calculated maximum consumption of the computer
does not exceed the rated capacity of the supply. Note that advice on overall power supply ratings often
given by the manufacturer of a single component, typically a graphics card, should be treated with great
skepticism. These manufacturers want to minimize support issues due to under-rating of the power supply
and so advise customers to use a more powerful supply than is strictly necessary.
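The two rules above (input power = output / efficiency, and the 60% sizing rule of thumb) can be worked through numerically. This is an illustrative sketch, not a sizing tool:

```python
# Worked versions of the two rules above (illustrative only):
# input power = output / efficiency, with the remainder lost as heat;
# and the sizing rule of thumb that typical draw should sit near 60%
# of the supply's rated capacity.
def input_power(output_w: float, efficiency: float) -> float:
    return output_w / efficiency

def heat_w(output_w: float, efficiency: float) -> float:
    return input_power(output_w, efficiency) - output_w

# The 75%-efficient example from the text: 75 W out needs 100 W in.
print(input_power(75, 0.75))  # 100.0
print(heat_w(75, 0.75))       # 25.0

def recommended_rating_w(typical_draw_w: float) -> float:
    # typical consumption should be about 60% of rated capacity
    return typical_draw_w / 0.6

print(round(recommended_rating_w(240)))  # 400
```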
Various initiatives are underway to improve the efficiency of computer power supplies. The Climate Savers
Computing Initiative promotes energy saving and reduction of greenhouse gas emissions by encouraging the
development and use of more efficient power supplies. 80 PLUS certifies power supplies that meet certain
efficiency criteria, and encourages their use via financial incentives. Businesses that adopt them also spend
less on electricity to cool the PSU and the computers themselves, compounding the savings.
Facts
Life span is usually measured in mean time between failures (MTBF). Higher MTBF ratings are
preferable for longer device life and reliability. Quality construction consisting of industrial
grade electrical components and/or a larger or higher speed fan can help to contribute to a higher MTBF
rating by keeping critical components cool, thus preventing the unit from overheating. Overheating is a
major cause of PSU failure. An MTBF value of 100,000 hours (about 11 years of continuous operation) is not
uncommon.
Power supplies may have passive or active power factor correction (PFC). Passive PFC is a simple way
of increasing the power factor by putting a coil in series with the primary filter capacitors. Active PFC is
more complex and can achieve higher PF, up to 99%.
In computer power supplies that have more than one +12V power rail, it is preferable for stability
reasons to spread the power load over the 12V rails evenly to help avoid overloading one of the rails on
the power supply.
Multiple 12V power supply rails are separately current limited as a safety feature; they are not
generated separately. Despite widespread belief to the contrary, this separation has no effect on
mutual interference between supply rails.
The ATX12V 2.x and EPS12V power supply standards defer to the IEC 60950 standard, which
requires that no more than 240 volt-amps be present between any two accessible points. Thus, each
wire must be current-limited to no more than 20 A; typical supplies guarantee 18 A without
triggering the current limit. Power supplies capable of delivering more than 18 A at 12 V connect
wires in groups to two or more current sensors which will shut down the supply if excess current
flows. Unlike a fuse or circuit breaker, these limits reset as soon as the overload is removed.
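The 240 VA limit explains the 20 A per-rail cap directly, since amps = volt-amps / volts:

```python
# IEC 60950's 240 VA limit, applied to a 12 V rail: the current cap
# follows from amps = VA / volts.
def max_amps(va_limit: float, volts: float) -> float:
    return va_limit / volts

print(max_amps(240, 12))  # 20.0; typical supplies guarantee 18 A
```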
Because of the above standards, almost all high-power supplies claim to implement separate rails;
however, this claim is often false. Many omit the necessary current-limit circuitry, both for cost
reasons and because it is an irritation to customers. (The lack is sometimes advertised as a feature
under names like "rail fusion" or "current sharing".)
When the computer is powered down but the power supply is still on, it can be started remotely
via Wake-on-LAN and Wake-on-ring or locally via Keyboard Power ON (KBPO) if the motherboard
supports it.
Early PSUs used a conventional (heavy) step-down transformer, but most modern computer power
supplies are a type of switched-mode power supply (SMPS) with a ferrite-cored high
frequency transformer.
Computer power supplies may have short circuit protection, overpower (overload) protection,
overvoltage protection, under voltage protection, overcurrent protection, and over temperature
protection.
Some power supplies come with sleeved cables, which are aesthetically nicer, make wiring easier and
cleaner, and have less detrimental effect on airflow.
Since supplies are self-certified, a manufacturer's claimed output may be double or more what is actually
provided. Although a too-large power supply provides an extra margin of safety against overloading, a larger
unit is often less efficient at lower loads (under 20% of its total capability) and therefore
wastes more electricity than a more appropriately sized unit. Additionally, computer power supplies
generally do not function properly if they are too lightly loaded (less than about 15% of the total load);
under no-load conditions they may shut down or malfunction. For this reason, no-load protection was
introduced in some power supplies.
The most important factor for judging a PSU's suitability for a given graphics card is the PSU's total 12 V
output, as modern graphics cards run from the 12 V rail. If the total 12 V output stated on the
PSU is higher than the suggested minimum for the card, then that PSU can fully supply the card. It is,
however, recommended that a PSU cover more than just the graphics card's demands, as other
components in the PC also depend on the 12 V output.
Power supplies can feature magnetic amplifiers or double-forward converter circuit design.
Wiring diagrams

AT main power connectors (P8 and P9):
Pin | Signal | Color
P8.1 | Power Good | Orange
P8.2 | +5 V | Red
P8.3 | +12 V | Yellow
P8.4 | −12 V | Blue
P8.5 | Ground | Black
P8.6 | Ground | Black
P9.1 | Ground | Black
P9.2 | Ground | Black
P9.3 | −5 V | White
P9.4 | +5 V | Red
P9.5 | +5 V | Red
P9.6 | +5 V | Red

24-pin ATX12V 2.x power supply connector (the 20-pin version omits the last four pins: 11, 12, 23 and 24):
Pin | Signal | Color || Pin | Signal | Color
1 | +3.3 V | Orange || 13 | +3.3 V (+3.3 V sense) | Orange (Brown)
2 | +3.3 V | Orange || 14 | −12 V | Blue
3 | Ground | Black || 15 | Ground | Black
4 | +5 V | Red || 16 | Power on | Green
5 | Ground | Black || 17 | Ground | Black
6 | +5 V | Red || 18 | Ground | Black
7 | Ground | Black || 19 | Ground | Black
8 | Power good | Grey || 20 | Reserved | N/C
9 | +5 V standby | Purple || 21 | +5 V | Red
10 | +12 V | Yellow || 22 | +5 V | Red
11 | +12 V | Yellow || 23 | +5 V | Red
12 | +3.3 V | Orange || 24 | Ground | Black
Industry Standard Architecture (ISA) is a computer bus standard for IBM PC compatible computers
introduced with the IBM Personal Computer to support its Intel 8088 microprocessor's 8-bit external data
bus and extended to 16 bits for the IBM Personal Computer/AT's Intel 80286 processor. The ISA bus was
further extended for use with 32-bit processors as Extended Industry Standard Architecture (EISA). For
general desktop computer use it has been supplanted by later buses such as IBM Micro Channel, VESA
Local Bus, Peripheral Component Interconnect and other successors. A derivative of the AT bus structure is
still used in the PC/104 bus, and internally within Super I/O chips.
Year created: 1988
Created by: Gang of Nine
Superseded by: PCI (1993)
Width in bits: 32
Number of devices: 1 per slot
Capacity: 8.33 MHz
Style: Parallel
Hotplugging interface: No
External interface: No
The Extended Industry Standard Architecture (in practice almost always shortened to EISA and frequently
pronounced "eee-suh") is a bus standard for IBM PC compatible computers. It was announced in late 1988
by PC clone vendors (the "Gang of Nine") as a counter to IBM's use of its proprietary Micro Channel
architecture (MCA) in its PS/2 series.
EISA extends the AT bus, which the Gang of Nine retroactively renamed to the ISA bus to avoid infringing
IBM's trademark on its PC/AT computer, to 32 bits and allows more than one CPU to share the bus. The bus
mastering support is also enhanced to provide access to 4 GB of memory. Unlike MCA, EISA can accept
older XT and ISA boards, since the lines and slots for EISA are a superset of those for ISA.
EISA was much favoured by manufacturers due to the proprietary nature of MCA, and even IBM produced
some machines supporting it. It was somewhat expensive to implement (though not as much as MCA), so it
never became particularly popular in desktop PCs. However, it was reasonably successful in the server
market, as it was better suited to bandwidth-intensive tasks (such as disk access and networking). Most
EISA cards produced were either SCSI or network cards. EISA was also available on some non-IBM
compatible machines such as the AlphaServer, HP 9000-D, SGI Indigo2 and MIPS Magnum.
By the time there was a strong market need for a bus of these speeds and capabilities, the VESA Local Bus
and later PCI filled this niche, and EISA vanished into obscurity.
Year created: 1987
Created by: IBM
Supersedes: ISA
Superseded by: PCI (1993)
Width in bits: 16 or 32
Capacity: 10 MHz
Style: Parallel
Hotplugging interface: No
External interface: No
Micro Channel Architecture (MCA) was a proprietary 16- or 32-bit parallel computer bus introduced by
IBM in 1987 which was used on PS/2 and other computers through the mid-1990s.
Three 5-volt 32-bit PCI expansion slots on a motherboard (PC bracket on left side)

Year created: July 1993
Created by: Intel
Supersedes: ISA, EISA, MCA, VLB
Superseded by: PCI Express (2004)
Width in bits: 32 or 64
Capacity: 133 MB/s (32-bit at 33 MHz); 266 MB/s (32-bit at 66 MHz or 64-bit at 33 MHz); 533 MB/s (64-bit at 66 MHz)
Style: Parallel
Hotplugging interface: Optional
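The PCI capacity figures follow directly from bus width times clock rate; a quick illustrative check:

```python
# Peak PCI throughput: (bus width in bits / 8 bits per byte) * clock
# in MHz gives MB/s. Illustrative arithmetic only.
def pci_peak_mb_s(width_bits: int, clock_mhz: float) -> float:
    return width_bits / 8 * clock_mhz

print(pci_peak_mb_s(32, 33.33))  # ~133 MB/s (32-bit at 33 MHz)
print(pci_peak_mb_s(64, 33.33))  # ~266 MB/s (64-bit at 33 MHz)
print(pci_peak_mb_s(64, 66.66))  # ~533 MB/s (64-bit at 66 MHz)
```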
Conventional PCI (PCI is an initialism formed from Peripheral Component Interconnect, part of the PCI
Local Bus standard and often shortened to PCI) is a computer bus for attaching hardware devices in a
computer. These devices can take either the form of an integrated circuit fitted onto the motherboard itself,
called a planar device in the PCI specification, or an expansion card that fits into a slot. The PCI Local Bus
was implemented in PCs, where it displaced ISA and VESA Local Bus as the standard expansion bus, and it
has since been adopted for other computer types. PCI is being replaced by PCI-X and PCI Express, but as of 2011, many
motherboards are still made with one or more PCI slots.
The PCI specification covers the physical size of the bus (including the size and spacing of the circuit board
edge electrical contacts), electrical characteristics, bus timing, and protocols. The specification can be
purchased from the PCI Special Interest Group (PCI-SIG).
Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as USB or
serial, TV tuner cards and disk controllers. PCI video cards replaced ISA cards until growing bandwidth
requirements outgrew the capabilities of PCI; the preferred interface for video cards became AGP, and then
PCI Express. PCI video cards remain available for use with old PCs without AGP or PCI Express slots.
Many devices previously provided on expansion cards are now either integrated onto motherboards
or available in USB and PCI Express versions. Modern PCs often have no PCI cards fitted.
However, PCI is still used for certain specialized cards.
AGP (Accelerated Graphics Port)

Year created: 1997
Created by: Intel
Superseded by: PCI Express (2004)
Width in bits: 32
Number of devices: 1 device per slot
Capacity: up to 2133 MB/s
Style: Parallel
1. PCI Express ×4 slot; 2. PCI Express ×16 slot

Year created: 2004
Created by: Intel, Dell, IBM, HP
Supersedes: AGP, PCI, PCI-X
Width in bits: 1–32
Number of devices: one device on each endpoint of each connection; PCI Express switches can create multiple endpoints out of one endpoint, allowing one endpoint to be shared with multiple devices
Capacity: per lane (each direction): v1.x: 250 MB/s (2.5 GT/s); v2.x: 500 MB/s (5 GT/s); v3.0: 1 GB/s (8 GT/s). 16-lane slot (each direction): v1.x: 4 GB/s (40 GT/s); v2.x: 8 GB/s (80 GT/s); v3.0: 16 GB/s (128 GT/s)
Style: Serial
Hotplugging interface: Yes, with ExpressCard or PCI Express ExpressModule
External interface: Yes, with PCI Express External Cabling such as Intel Thunderbolt
PCI Express, officially abbreviated as PCIe, is a computer expansion card standard designed to replace the older PCI,
PCI-X, and AGP bus standards. PCIe has numerous improvements over the aforementioned bus standards, including
higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism, and native hot-plug functionality.
More recent revisions of the PCIe standard support hardware I/O virtualization.
The PCIe electrical interface is also used in a variety of other standards, most notably ExpressCard, a laptop
expansion card interface.
Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group), a group of more
than 900 companies that also maintain the Conventional PCI specifications. PCIe 3.0 is the latest standard for
expansion cards that is available on mainstream personal computers.
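The per-lane throughput figures quoted in the infobox above follow from the line rate once encoding overhead is accounted for: 8b/10b for v1.x and v2.x, 128b/130b for v3.0. An illustrative check:

```python
# PCIe per-lane throughput from the line rate, accounting for the
# encoding overhead: 8b/10b for v1.x/v2.x, 128b/130b for v3.0.
def pcie_lane_mb_s(gt_per_s: float, enc_num: int, enc_den: int) -> float:
    # GT/s * payload fraction / 8 bits per byte * 1000 -> MB/s
    return gt_per_s * enc_num / enc_den / 8 * 1000

print(pcie_lane_mb_s(2.5, 8, 10))     # v1.x: 250.0 MB/s per lane
print(pcie_lane_mb_s(5.0, 8, 10))     # v2.x: 500.0 MB/s per lane
print(pcie_lane_mb_s(8.0, 128, 130))  # v3.0: ~985 MB/s (~1 GB/s)
# A x16 slot carries 16 lanes, so multiply by 16 for each direction.
```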