
Computer Organization

The question "How does a computer work?" is the concern of computer
organization. Computer organization encompasses all physical aspects of
computer systems: the way in which the various circuits and structural
components come together to make up a fully functional computer system
is what we call its organization.

fig-1.1

Computer Architecture
Computer architecture is concerned with the
structure and behavior of the computer system and
refers to all the logical aspects of system
implementation as seen through the eyes of a
programmer. For example, in multithreaded
programming, the implementation of mutexes and
semaphores to restrict access to critical regions
and to prevent race conditions among processes is
an architectural concern.
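To make the idea concrete, here is a minimal sketch in C++ (an illustration written for this discussion, not something from the text): a mutex guards the critical region so that only one thread at a time updates a shared counter, which is exactly the kind of race-condition prevention described above.

    #include <iostream>
    #include <mutex>
    #include <thread>

    int counter = 0;              // shared data
    std::mutex counter_lock;      // guards the critical region

    void worker() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> guard(counter_lock);  // acquire; released automatically
            ++counter;            // critical region: only one thread here at a time
        }
    }

    int main() {
        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
        std::cout << "counter = " << counter << '\n';  // always 200000 with the lock in place
    }

Without the lock the two threads could interleave their read-modify-write steps and lose updates; with it, access to the critical region is serialized.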

fig-1.2

Therefore,
What does a computer do? = Architectural concern.
How does a computer work? = Organizational concern.
Taking automobiles as an example, making a car is, at a general level, a
two-step process: 1) making a logical design of the car, and 2) making a
physical implementation of that design. The designer has to decide on
everything, from how to design the car for maximum environmental
friendliness to what materials to use so that the car is cost effective and
at the same time looks good. All of this falls under the category of
architecture. When the design is actually implemented in a car
manufacturing plant and a real car is built, we say that organization has
taken place.
What are the benefits of studying computer
architecture and organization?
Before delving into the technicalities, common sense tells us that as a
user it is perfectly all right to operate a computer without understanding
what it does internally to make magical things happen on the screen, or
how it does it.
However, if a computer science student (lacking knowledge of computer
architecture and organization) were to write code that does not comply
with the internal architecture and organization of his or her computer,
the computer would behave strangely, and in the end the student would
have to take it to a service center and blindly rely on those guys to fix
the problems. If this is the case, then the only difference between a user
and a computer science student is that the latter knows how to display
the words "Hello World" on the computer screen in a couple of different
programming languages.
Say, for example, a game developer designs a game with the frames-per-second
setting somewhere above 30 fps. The gameplay would become much faster, but
CPU usage would rise dramatically, making it very difficult for the CPU to do
much else while the game is running and noticeably slowing down the computer.
This is because the frames-per-second value specifies how many times the
screen gets updated every second. If the value is too large, there are too
many updates per second, and more and more of the power and attention of the
CPU has to be focused on running the game. If the value is too low, CPU usage
falls remarkably, but the game becomes much slower. Therefore, a value of
about 30 makes the game reasonably fast without using up too much of the CPU.
Without this knowledge the game developer would ignorantly be making
inefficient games that would struggle to make any commercial impact.
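As a hedged sketch of the idea (the update_and_render function below is a hypothetical stand-in for the game's per-frame work, not anything from the text), a frame-rate cap can be implemented by sleeping away whatever is left of each 1/30-second slot, which keeps the game from monopolizing the CPU:

    #include <chrono>
    #include <thread>

    void update_and_render() { /* hypothetical per-frame game work goes here */ }

    int main() {
        using clock = std::chrono::steady_clock;
        const auto frame_budget = std::chrono::milliseconds(1000 / 30);  // about 33 ms per frame

        for (int frame = 0; frame < 300; ++frame) {    // run 300 frames (roughly 10 seconds)
            auto start = clock::now();
            update_and_render();                       // do one frame's worth of work
            auto elapsed = clock::now() - start;
            if (elapsed < frame_budget)                // finished early this frame:
                std::this_thread::sleep_for(frame_budget - elapsed);   // give the CPU back
        }
    }

If the sleep is removed, the loop spins as fast as the CPU allows, which is the high-CPU-usage situation described above.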

What are the factors that prevent us from speeding up?
1) The fetch-execute cycle: The CPU generally
fetches an instruction from memory and then
executes it systematically. This whole process can
be cumbersome if the computer does not
implement a system such as pipelining. This is
because in general the fetch-execute cycle consists
of three stages:
a) Fetch
b) Decode
c) Execute

Without a pipelining system, if the CPU is already dealing with a particular
instruction, it will only consider another instruction once it is completely
done executing the current one; there is no chance of concurrency. That slows
down the computer considerably, as each instruction has to wait for the
previous one to finish.
If, on the other hand, a pipelining system is implemented, the CPU can begin
to fetch a new instruction while it is still decoding another. By the time it
finishes executing the initial instruction, the next instruction is ready to
be executed immediately. In this way the CPU keeps several instructions in
different stages at the same time and, once the pipeline is full, can complete
roughly one instruction per clock cycle instead of one every three.
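The toy simulation below (an illustration written for this discussion, not taken from any real processor) prints which instruction occupies each of the three stages on every clock cycle; once the pipeline is full, one instruction finishes per cycle even though each one still passes through fetch, decode and execute.

    #include <cstdio>

    int main() {
        const int stages = 3;                          // fetch, decode, execute
        const int instructions = 5;
        const char* names[3] = {"fetch", "decode", "execute"};

        // With pipelining, instruction i occupies stage s during cycle i + s.
        for (int cycle = 0; cycle < instructions + stages - 1; ++cycle) {
            std::printf("cycle %d:", cycle + 1);
            for (int s = 0; s < stages; ++s) {
                int instr = cycle - s;                 // which instruction is in stage s now
                if (instr >= 0 && instr < instructions)
                    std::printf("  %s I%d", names[s], instr + 1);
            }
            std::printf("\n");
        }
        // Five instructions complete in 7 cycles here, versus 15 cycles without overlap.
    }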

2) Hardware limitations: A simple CPU traditionally contains one ALU
(Arithmetic Logic Unit), for example, and this restricts it to handling only
one instruction per clock cycle. With the rapid advancement of technology and
the resulting rise in computing demands, this is clearly not fast enough. A
possible solution, therefore, is to take the architecture to the superscalar
level.
A superscalar architecture is one that contains two or more functional units
and can therefore carry out two or more instructions per clock cycle. A CPU
could be built with two ALUs, for example.

fig-1.3

The figure above shows the micro-architecture of a processor. It consists of
two ALUs, an on-board FPU (Floating Point Unit) with its own set of floating
point registers, and a BPU (Branch Prediction Unit) for handling branch
instructions. The cache memory on top allows the processor to fetch
instructions much faster. Clearly, such a processor is built for speed. With
its four execution units it could execute four instructions all at once.
3) Parallelism: A typical computer with a single CPU can be tweaked to
perform faster, but that performance will always be limited, because, after
all, how much can you really get out of a single processor? But imagine
having more than one processor in a computer system.
Such systems do exist, in the form of multiprocessors and parallel computers.
They usually have between two and a few thousand interconnected processors,
each either having its own private memory or sharing a common memory. The
obvious advantage of such a system is that any task it undertakes can be
shared among the processors by allocating a part of the task to each of them.
For example, if a calculation were to take ten hours to complete on a
conventional computer with one CPU, it would take far less time on a
multiprocessor or parallel computer: if the calculation could be split into
ten chunks, each requiring one hour to complete, then on a multiprocessor
system with ten processors the entire calculation would take only one hour,
because the different chunks would run in parallel.
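As a rough sketch of this chunking idea (using ordinary C++ threads on a multi-core machine rather than any particular multiprocessor system), the code below splits a large summation into ten equal chunks and gives one chunk to each thread; with enough free cores the chunks run in parallel.

    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        const long long n = 100000000;        // total work: sum the integers 1..n
        const int chunks = 10;                // split the work into ten equal pieces
        std::vector<long long> partial(chunks, 0);
        std::vector<std::thread> workers;

        for (int c = 0; c < chunks; ++c) {
            workers.emplace_back([&partial, c, n] {
                long long lo = c * (n / chunks) + 1;
                long long hi = (c + 1) * (n / chunks);
                for (long long i = lo; i <= hi; ++i)   // each thread works on its own chunk
                    partial[c] += i;
            });
        }
        for (auto& t : workers) t.join();     // wait until every chunk is done

        long long total = 0;
        for (long long p : partial) total += p;
        std::cout << "sum = " << total << '\n';        // equals n*(n+1)/2
    }

Just as in the ten-hour example, the elapsed time approaches one tenth of the serial time only if the machine really has ten processors free and the chunks are independent.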
4) Clock Speed and The Von Neumann
Bottleneck: The clock speed of a computer tells
us the rate at which the CPU operates. Some of the
first microcomputers had clock speeds in the range
of MHz (Megahertz). Nowadays we have speeds in
the range of GHz (Gigahertz).

One possible route to greater speed would be to increase the clock speed.
However, it turns out that increasing the clock speed alone does not
guarantee noticeable gains in speed and performance. This is because the
effective speed of the processor is also determined by the rate at which it
can retrieve data and instructions from memory.
Suppose a particular task takes ten units of time to
complete- eight units of time are spent waiting on
memory and the remaining two are spent on
processing. By doubling the clock speed without
improving the memory access time, the processing
time would be reduced from two to one unit of time
(without any change in the memory access time).
Therefore the overall gain in performance will be
only ten percent because the overall time is now
reduced from ten to nine units of time.
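The figures above are essentially an instance of Amdahl's law with the memory-bound fraction held fixed. The short calculation below (an illustration, not from the text) reproduces the ten-to-nine result and shows that even much larger clock-speed increases barely help while the memory time stays at eight units.

    #include <cstdio>

    int main() {
        const double memory_time  = 8.0;   // units spent waiting on memory (unchanged)
        const double compute_time = 2.0;   // units spent processing at the original clock

        for (double clock_factor : {1.0, 2.0, 4.0, 8.0}) {
            double total = memory_time + compute_time / clock_factor;
            std::printf("clock x%.0f: total %.2f units, speedup %.2fx\n",
                        clock_factor, total, (memory_time + compute_time) / total);
        }
        // Doubling the clock gives 9.00 units: only about a ten percent reduction in time.
    }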
This limitation, caused by the mismatch in speed between the CPU and memory,
is known as the Von Neumann bottleneck. One simple way to ease it is to
install a cache memory between the CPU and main memory, which reduces the
effective memory access time. Cache memory has a faster access time than main
memory but offers less storage capacity and is more expensive.

5) Branch prediction: The processor acts like a psychic and predicts which
branches, or groups of instructions, it will have to deal with next by looking
at the instruction code fetched from memory. If the processor guesses
correctly most of the time, it can fetch the correct instructions beforehand
and buffer them, which allows it to remain busy most of the time. There are
various algorithms for implementing branch prediction, some very complex,
often predicting multiple branches in advance. All of this is aimed at keeping
the CPU supplied with work and therefore optimizing speed and performance.
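One classic scheme, shown here as a hedged sketch rather than the algorithm of any particular processor, is a two-bit saturating counter kept per branch: two wrong guesses in a row are needed before the prediction flips, so a single odd outcome does not disturb a steady pattern such as a loop branch.

    #include <cstdio>

    // Two-bit saturating counter: states 0-1 predict "not taken", states 2-3 predict "taken".
    struct TwoBitPredictor {
        int state = 2;                         // start in the weakly-taken state

        bool predict() const { return state >= 2; }

        void update(bool taken) {              // move toward the actual outcome, saturating at 0 and 3
            if (taken) { if (state < 3) ++state; }
            else       { if (state > 0) --state; }
        }
    };

    int main() {
        // A loop-like branch: taken nine times, then not taken once, repeated twice.
        bool outcomes[20];
        for (int i = 0; i < 20; ++i) outcomes[i] = (i % 10 != 9);

        TwoBitPredictor bp;
        int correct = 0;
        for (bool actual : outcomes) {
            if (bp.predict() == actual) ++correct;
            bp.update(actual);
        }
        std::printf("correct predictions: %d out of 20\n", correct);  // 18 of 20 for this pattern
    }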

A Brief History of Computers

Computers truly came into their own as great inventions in the last two
decades of the 20th century. But their history stretches back more than 2500
years to the abacus: a simple calculator made from beads and wires, which is
still used in some parts of the world today. The difference between an ancient
abacus and a modern computer seems vast, but the principle (making repeated
calculations more quickly than the human brain) is exactly the same.
One of the earliest machines designed to assist people in calculations was
the abacus which is still being used some 5000 years after its invention.
In 1642 Blaise Pascal (a famous French mathematician) invented an adding
machine based on mechanical gears in which numbers were represented by the
cogs on the wheels.
In the 1830's the Englishman Charles Babbage invented a "Difference Engine"
made out of brass and pewter rods and gears, and also designed a further
device which he called an "Analytical Engine". His design contained the five
key characteristics of modern computers:
1. An input device
2. Storage for numbers waiting to be processed
3. A processor or number calculator
4. A unit to control the task and the sequence of its calculations
5. An output device
Augusta Ada Byron (later Countess of Lovelace) was an associate of Babbage
who has become known as the first computer programmer.
An American, Herman Hollerith, developed (around 1890) the first electrically
driven device. It utilised punched cards and metal rods which passed through
the holes to close an electrical circuit and thus cause a counter to advance. This
machine was able to complete the calculation of the 1890 U.S. census in 6
weeks compared with 7 1/2 years for the 1880 census which was manually
counted.
In 1936 Howard Aiken of Harvard University convinced Thomas Watson of IBM
to invest $1 million in the development of an electromechanical version of
Babbage's analytical engine. The Harvard Mark 1 was completed in 1944 and
was 8 feet high and 55 feet long.

At about the same time (the late 1930's) John Atanasoff of Iowa State
University and his assistant Clifford Berry built the first digital computer that
worked electronically, the ABC (Atanasoff-Berry Computer). This machine was
basically a small calculator.
In 1943, as part of the British war effort, a series of vacuum tube based
computers (named Colossus) were developed to crack German secret codes.
The Colossus Mark 2 series consisted of 2400 vacuum tubes.

John Mauchly and J. Presper Eckert of the University of Pennsylvania developed
these ideas further by proposing a huge machine consisting of 18,000 vacuum
tubes. ENIAC (Electronic Numerical Integrator And Computer) was born in 1946.
It was a huge machine with a huge power requirement and two major
disadvantages. Maintenance was extremely difficult, as the tubes broke down
regularly and had to be replaced, and there was also a big problem with
overheating. The most important limitation, however, was that every time a new
task needed to be performed the machine needed to be rewired. In other words,
programming was carried out with a soldering iron.
In the late 1940's John von Neumann (at the time a special consultant to the
ENIAC team) developed the EDVAC
(Electronic Discrete Variable Automatic Computer) which pioneered the "stored
program concept". This allowed programs to be read into the computer and so
gave birth to the age of general-purpose computers.

The Generations of Computers


It used to be quite popular to refer to computers as belonging to one of several
"generations" of computer. These generations are:
The First Generation (1943-1958): This generation is often described as
starting with the delivery of the first commercial computer to a business client.
This happened in 1951 with the delivery of the UNIVAC to the US Bureau of the
Census. This generation lasted until about the end of the 1950's (although some
stayed in operation much longer than that). The main defining feature of the
first generation of computers was that vacuum tubes were used as internal
computer components. Vacuum tubes are generally about 5-10 centimeters in
length, and the large numbers of them required in computers resulted in huge
and extremely expensive machines that often broke down (as tubes failed).
The Second Generation (1959-1964): In the mid-1950's Bell Labs
developed the transistor. Transistors were capable of performing many of the
same tasks as vacuum tubes but were only a fraction of the size. The first
transistor-based computer was produced in 1959. Transistors were not only
smaller, enabling computer size to be reduced, but they were faster, more
reliable and consumed less electricity.
The other main improvement of this period was the development of computer
languages. Assembler languages or symbolic languages allowed
programmers to specify instructions in words (albeit very cryptic words) which
were then translated into a form that the machines could understand (typically
series of 0's and 1's: Binary code). Higher level languages also came into
being during this period. Whereas assembler languages had a one-to-one
correspondence between their symbols and actual machine functions, higher
level language commands often represent complex sequences of machine
codes. Two higher-level languages developed during this period (Fortran and
Cobol) are still in use today though in a much more developed form.
The Third Generation (1965-1970): In 1965 the first integrated circuit
(IC) was developed, in which a complete circuit of hundreds of components
could be placed on a single silicon chip 2 or 3 mm square. Computers
using these IC's soon replaced transistor based machines. Again, one of the
major advantages was size, with computers becoming more powerful and at the
same time much smaller and cheaper. Computers thus became accessible to a
much larger audience. An added advantage of smaller size is that electrical
signals have much shorter distances to travel and so the speed of computers
increased.
Another feature of this period is that computer software became much more
powerful and flexible and for the first time more than one program could share
the computer's resources at the same time (multi-tasking). The majority of
programming languages used today are often referred to as 3GL's (3rd
generation languages) even though some of them originated during the 2nd
generation.
The Fourth Generation (1971-present): The boundary between the third
and fourth generations is not very clear-cut at all. Most of the developments
since the mid 1960's can be seen as part of a continuum of gradual
miniaturisation. In 1970 large-scale integration was achieved where the
equivalent of thousands of integrated circuits were crammed onto a single
silicon chip. This development again increased computer performance

(especially reliability and speed) whilst reducing computer size and cost. Around
this time the first complete general-purpose microprocessor became available
on a single chip. In 1975 Very Large Scale Integration (VLSI) took the
process one step further. Complete computer central processors could now be
built into one chip. The microcomputer was born. Such chips are far more
powerful than ENIAC and are only about 1cm square whilst ENIAC filled a large
building.
During this period Fourth Generation Languages (4GL's) have come into
existence. Such languages are a step further removed from the computer
hardware in that they use language much like natural language. Many database
languages can be described as 4GL's. They are generally much easier to learn
than are 3GL's.
The Fifth Generation (the future): The "fifth generation" of computers were
defined by the Japanese government in 1980 when they unveiled an optimistic
ten-year plan to produce the next generation of computers. This was an
interesting plan for two reasons. Firstly, it was not at all clear what the
fourth generation was, or even whether the third generation had finished yet.
Secondly, it was an attempt to define a generation of computers before they
had come into existence. The main requirement of the 5G machines was that
they incorporate the features of Artificial Intelligence, Expert Systems, and
Natural Language. The goal was to produce machines that are capable of
performing tasks in similar ways to humans, are capable of learning, and are
capable of interacting with humans in natural language and preferably using
both speech input (speech recognition) and speech output (speech synthesis).
Such goals are obviously of interest to linguists and speech scientists as natural
language and speech processing are key components of the definition. As you
may have guessed, this goal has not yet been fully realised, although significant
progress has been made towards various aspects of these goals.

Parallel Computing
Up until recently most computers were serial computers. Such computers had a
single processor chip containing a single processor. Parallel computing is based
on the idea that if more than one task can be processed simultaneously on
multiple processors then a program would be able to run more rapidly than it
could on a single processor. The supercomputers of the 1990s, such as the Cray
computers, were extremely expensive to purchase (usually over $1,000,000)
and often required cooling by liquid helium so they were also very expensive to
run. Clusters of networked computers (eg. a Beowulf cluster of PCs running
Linux) have been, since 1994, a much cheaper solution to the problem of fast
processing of complex computing tasks. By 2008, most new desktop and laptop
computers contained more than one processor on a single chip (eg. the Intel
"Core 2 Duo" released in 2006 or the Intel "Core 2 Quad" released in 2007).
Having multiple processors does not necessarily mean that parallel computing
will work automatically. The operating system must be able to distribute
programs between the processors (eg. recent versions of Microsoft Windows
and Mac OS X can do this). An individual program will only be able to take
advantage of multiple processors if the computer language it's written in is able
to distribute tasks within a program between multiple processors. For example,
OpenMP supports parallel programming in Fortran and C/C++.
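A minimal OpenMP sketch in C/C++ (an illustration of the idea; it requires an OpenMP-aware compiler, e.g. gcc or clang with -fopenmp): the pragma asks the runtime to divide the loop iterations among the available processors, and the reduction clause combines each thread's partial sum safely.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* Iterations are shared out among the threads; each thread keeps its own
           partial sum, and the reduction adds them together at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= n; i++)
            sum += 1.0 / i;

        printf("threads available: %d, harmonic sum: %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }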

A Brief History of Computer Technology


A complete history of computing would include a multitude of diverse devices such as
the ancient Chinese abacus, the Jacquard loom (1805) and Charles Babbage's
"analytical engine" (1834). It would also include discussion of mechanical, analog
and digital computing architectures. As late as the 1960s, mechanical devices, such as
the Marchant calculator, still found widespread application in science and engineering.
During the early days of electronic computing devices, there was much discussion
about the relative merits of analog vs. digital computers. In fact, as late as the 1960s,
analog computers were routinely used to solve systems of finite difference equations
arising in oil reservoir modeling. In the end, digital computing devices proved to have
the power, economics and scalability necessary to deal with large scale computations.
Digital computers now dominate the computing world in all areas ranging from the
hand calculator to the supercomputer and are pervasive throughout society. Therefore,
this brief sketch of the development of scientific computing is limited to the area of
digital, electronic computers.
The evolution of digital computing is often divided into generations. Each generation
is characterized by dramatic improvements over the previous generation in the
technology used to build computers, the internal organization of computer systems,
and programming languages. Although not usually associated with computer
generations, there has been a steady improvement in algorithms, including algorithms
used in computational science. The following history has been organized using these
widely recognized generations as mileposts.

3.1 The Mechanical Era (1623-1945)
3.2 First Generation Electronic Computers (1937-1953)
3.3 Second Generation (1954-1962)
3.4 Third Generation (1963-1972)
3.5 Fourth Generation (1972-1984)
3.6 Fifth Generation (1984-1990)
3.7 Sixth Generation (1990 - )

Name   Signification                   Year   Transistors number[19]   Logic gates number[20]
SSI    small-scale integration         1964   1 to 10                  1 to 12
MSI    medium-scale integration        1968   10 to 500                13 to 99
LSI    large-scale integration         1971   500 to 20,000            100 to 9,999
VLSI   very large-scale integration    1980   20,000 to 1,000,000      10,000 to 99,999
ULSI   ultra-large-scale integration   1984   1,000,000 and more       100,000 and more
