
CHAPTER 01

[FUNDAMENTAL CONCEPTS OF COMPUTER ORGANIZATION & ARCHITECTURE]

Issues:
Definitions
Architectures and organizations
Structure and functions
Evolution and performance
Definitions:
Computers are machines that are built with no specific application in mind, but rather are capable of performing
computation needed by a diversity of applications.

Computers are machines that can solve problems for people by carrying out instructions given to them.

There are various devices that are considered computers. These devices vary in cost, size, performance, and
application. In spite of this, certain fundamental concepts apply consistently throughout.

Super Computers

Mainframe Computers

Mini Computers

Micro Computers (PCs)

Mobile Computers

What are the differences between these?

A computer is an electronic device that has the ability to store, retrieve, and process data and can be
programmed with instructions that it remembers.
The physical parts that make up a computer are called hardware.
Programs that tell a computer what to do are called software.

Computer Organization and Architecture:
The distinction between computer architecture and organization:
Computer Organization:

The term 'Computer Organization' refers to the operational units of the computer and their
interconnections.
The main hardware units are the CPU, memory, I/O units, etc. The hardware details such as various
computational units, control signals, interfaces between computer peripherals, and the memory technology
used are included in the organization of the computer.
Therefore, it can be said that the hardware units and their connection details cover the organization of the
computer system.

N.B (simple definitions):

Computer organization is the view of the computer that is seen by the logic designer. This includes

Capabilities & performance characteristics of functional units (e.g., registers, ALU, shifters, etc.).

Ways in which these components are interconnected

How information flows between components

Logic and means by which such information flow is controlled

Computer Architecture:
By the term 'Computer architecture', it is generally meant those qualities of a system that are visible to a
programmer, such as the number of bits used to represent various data types, the instruction set of the
computer, techniques for addressing memory, methods used for input-output, etc.
Computer architecture is the design of the computer at the hardware/software interface: it refers to those
attributes of a system visible to a programmer.

Attributes of a computing system as seen by the programmer or compiler include:

Instruction Set (what operations can be performed?)
Instruction Format (how are instructions specified?)
Data storage (where is data located?)
Addressing Modes (how is data accessed?)

Computer architecture is simply defined as the set of instructions a processor can execute.

Current architectures include:

IA32 (x86), IA64, ARM, PowerPC

Structure and Function:


The designer need only deal with a particular level of the system at a time. At each level, the system consists of a set of
components and their interrelationships so, at each level, the designer is concerned with structure and function:

Structure: The way in which the components are interrelated

There are four main structural components inside a computer:

Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions;
often simply referred to as processor.

Memory: Stores data.

I/O: Moves data between the computer and its external environment.

System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O. A
common example of system interconnection is a system bus, consisting of a number of conducting wires to
which all the other components attach.

Functions:
Function is the operation of the individual components as part of the structure.
In general terms, there are only four basic functions that a computer can perform:
Data processing
Data storage
Data movement
Control

Computer evolutions and performance:


Computer evolutions:
The first electronic computer was designed and built at the University of Pennsylvania based on vacuum tube
technology. Vacuum tubes were used to perform logic operations and to store data.

The generations of computers have been divided into five according to the development of the technologies used to
fabricate the processors, memories, and I/O units.

First Generation: Vacuum tube based

(ENIAC - Electronic Numerical Integrator And Calculator

EDSAC - Electronic Delay Storage Automatic Calculator

EDVAC - Electronic Discrete Variable Automatic Computer

UNIVAC - Universal Automatic Computer

IBM 701)

Vacuum tubes were used; basic arithmetic operations took a few milliseconds
1946 - 1959
Bulky
Consumed more power, with limited performance
High cost
Used assembly language to prepare programs, which were translated into machine language for execution
Magnetic tape / magnetic drum were used as secondary memory
Mainly used for scientific computations

Second Generation: Transistor based

(Examples: IBM 7030; machines from Digital Equipment Corporation)

Transistors (invented at AT&T Bell Labs) were used in place of vacuum tubes.
1959 - 1965
Small in size
Lesser power consumption and better performance
Lower cost
High-level languages such as FORTRAN, COBOL, etc. were used. Compilers were developed to translate a high-level
program into the corresponding assembly language program, which was then translated into machine language
Third Generation: Integrated Circuit based

(System 360 Mainframe from IBM, PDP-8 Mini Computer from Digital Equipment Corporation)

Small Scale Integration and Medium Scale Integration technology were implemented in CPU, I/O processors etc.
1965 - 1971
Smaller & better performance
Comparatively lesser cost
Faster processors
In the beginning, magnetic core memories were used. Later they were replaced by semiconductor memories (RAM
& ROM)
Microprogramming, parallel processing (pipelining, multiprocessor systems, etc.), multiprogramming, and
multi-user (time-shared) systems were introduced
Operating system software was introduced

Fourth Generation: VLSI microprocessor based

(Intel's 8088, 80286, 80386, 80486, ...; Motorola's 68000, 68030, 68040; Apple II; CRAY 1/2/X-MP, etc.)

1971 - 1980
Microprocessors were introduced as CPUs: a complete processor and a large section of main memory could be
implemented on a single chip
Tens of thousands of transistors could be placed on a single chip (VLSI design implemented)
CRT screen, laser & ink jet printers, scanners etc were developed.
Semiconductor memory chips were used as the main memory.
Secondary memory was composed of hard disks; floppy disks & magnetic tapes were used for backup memory
Introduced C language and Unix OS
Introduced Graphical User Interface

Fifth Generation: ULSI microprocessor based

(IBM notebooks, Pentium PCs - Pentium 1/2/3/4/Dual core/Quad core..., SUN workstations, Origin 2000, PARAM
10000, IBM SP/2)

1980 to present
Computers based on artificial intelligence are available
Computers use extensive parallel processing, multiple pipelines, multiple processors etc

Introduced ULSI (Ultra Large Scale Integration) technology: Intel's Pentium 4 microprocessor contains 55 million
transistors - millions of components on a single IC chip
Introduced the World Wide Web (and other applications such as e-mail, e-commerce, virtual
libraries/classrooms, multimedia applications, etc.)
New operating systems were developed: Windows 95/98/XP, LINUX, etc.

Summary on evolutions:

As technology improved, size of transistors decreased rapidly

This enabled more and more transistors to be packed in a small area (density increased)

Increased density => shorter electrical path between components

Increased operating speed

Improved performance

Computer performance:

Performance is measured by how long a processor takes to run a program


Clock:

All computers have a clock to determine when events take place:


One period of this clock is called clock cycle time

The speed of a computer processor, or CPU, is determined by its clock rate


For example, a 4GHz processor performs 4,000,000,000 clock cycles per second.
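The relationship between clock rate and clock cycle time can be sketched as follows (a minimal illustration; the 4 GHz figure comes from the example above):

```python
# Clock cycle time is the reciprocal of the clock rate.
clock_rate_hz = 4e9               # 4 GHz, as in the example above
cycle_time_s = 1 / clock_rate_hz  # duration of one clock cycle, in seconds
print(cycle_time_s)               # 2.5e-10 s, i.e. 0.25 ns per cycle
```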

The average number of clock cycles per instruction for a program is called cycles per instruction (CPI)
E.g. A computer program contains 10 instructions:
5 of the instructions take 1 clock cycle to execute
3 of the instructions take 2 clock cycles to execute
2 of the instructions take 3 clock cycles to execute
Q. What is the CPI for the program?
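As a worked answer to the question above, CPI is the total clock cycles divided by the total instruction count; a small Python sketch using the numbers given:

```python
# (instruction count, cycles per instruction) pairs from the example above
groups = [(5, 1), (3, 2), (2, 3)]
total_cycles = sum(n * c for n, c in groups)    # 5*1 + 3*2 + 2*3 = 17
total_instructions = sum(n for n, _ in groups)  # 10
cpi = total_cycles / total_instructions
print(cpi)                                      # 17 / 10 = 1.7
```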

CPU time = Instruction count × CPI × Clock cycle time
Instruction count: number of instructions executed by a program
Less CPU time => Better performance
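The CPU time formula can be illustrated with a small sketch; the instruction count, CPI, and clock rate below are made-up values, not taken from the text:

```python
# CPU time = instruction count x CPI x clock cycle time
instruction_count = 2_000_000        # assumed number of executed instructions
cpi = 1.5                            # assumed average cycles per instruction
clock_rate_hz = 2e9                  # assumed 2 GHz clock => 0.5 ns cycle time
cpu_time_s = instruction_count * cpi * (1 / clock_rate_hz)
print(cpu_time_s)                    # roughly 1.5 ms
```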

Response Time (Execution Time): Total time required for a computer to complete a task

Throughput (Bandwidth): Number of tasks completed per unit time

Simple Analogy example:

A guy makes a shoe in 1 hr, two shoes in 2 hrs and so on


Performance is improved:

If the same guy now makes a shoe in 30 min (maybe the technology improved), he makes two shoes in 1 hr
(execution time reduced)
If two guys each make a shoe in 1 hr, two shoes are made in 1 hr in total (throughput increased)
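The shoe analogy can be restated numerically (a minimal model; "tasks per hour" stands in for throughput):

```python
def throughput_per_hour(workers, minutes_per_task):
    """Tasks completed per hour when each worker finishes one task in minutes_per_task."""
    return workers * (60 / minutes_per_task)

print(throughput_per_hour(1, 60))  # 1.0 - one guy, one shoe per hour
print(throughput_per_hour(1, 30))  # 2.0 - faster guy: response time halved, throughput doubled
print(throughput_per_hour(2, 60))  # 2.0 - two guys: same response time, throughput doubled
```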

Improve performance techniques:

An obvious solution to improve performance: increase the clock rate (clock speed)


(Increasing clock rate => Reducing response time=>Improved performance)

But as the clock rate increases, power consumption (power dissipation) increases => cooling becomes harder
Techniques that improve response time:
Increasing clock rate
Caching (use cache memory)

Techniques that improve throughput:


Instruction-level parallelism (pipelining)
Multiple cores

Pipelining:
The processor fetches, decodes, executes, and writes back different instructions at the same time.
Pipelining improves throughput only (it does not reduce the execution time of a single instruction)

Fig1: processing with pipelining

Fig 2: processing without pipelining (sequential)
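The cycle counts behind the two figures can be modeled with a small sketch (idealized: a 4-stage pipeline with no stalls or hazards, which real processors rarely achieve):

```python
def sequential_cycles(n_instructions, n_stages):
    # Without pipelining, each instruction passes through all stages before the next starts.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # With pipelining, after the pipeline fills, one instruction completes per cycle.
    return n_stages + (n_instructions - 1)

n, k = 8, 4  # 8 instructions; 4 stages: fetch, decode, execute, write
print(sequential_cycles(n, k))  # 32 cycles
print(pipelined_cycles(n, k))   # 11 cycles - same per-instruction latency, higher throughput
```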

Multiple cores:
Modern microprocessors contain multiple processors (cores) on a single chip
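The throughput benefit of multiple cores can be sketched with an idealized model (it assumes a perfectly parallel workload with no coordination overhead, which real programs rarely achieve):

```python
import math

def time_on_cores(work_units, unit_time, cores):
    # Each core takes an equal share of the work; all cores run in parallel.
    return math.ceil(work_units / cores) * unit_time

print(time_on_cores(8, 1.0, 1))  # 8.0 time units on a single core
print(time_on_cores(8, 1.0, 4))  # 2.0 time units on a quad-core chip
```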
