Princess Sumaya University for Technology

Computer Architecture
Dr. Esam Al_Qaralleh

Review

Course Syllabus

Grading
The first priority: Maximize Learning
Your grade will depend on how much you have learned:
2 midterm exams (40% to 50%)
Quizzes and homework (20% to 10%)
Final exam (40%)
Activity in the class
Questions and discussion in class earn you points and improve the quality of teaching.

Course Syllabus

Text Book
Structured Computer Organization
Andrew S. Tanenbaum
Prentice-Hall, 4th Edition, 1999

Reference Books
Computer Architecture: A Quantitative Approach
John L. Hennessy & David A. Patterson
Morgan Kaufmann, 3rd Edition, 2003
Computer System Architecture
M. Morris Mano
Prentice Hall, 3rd Edition, 1993

Course Syllabus

Class Rules
Attendance to class should be on time;
very late attendance will be counted as an absence.
Using a mobile phone is NOT ALLOWED during class.
Questions and discussion are the best way to communicate.


What is a digital computer?

A digital computer is a machine composed of the following three basic components:
Input/Output
Central Processing Unit (CPU)
Memory


The Von Neumann Machine, 1945

The Von Neumann model consists of five major components:
input unit
output unit
ALU
memory unit
control unit
Sequential execution

Von Neumann Model


A refinement of the Von Neumann model, the system bus model
has a CPU (ALU and control), memory, and an input/output unit.
Communication among components is handled by a shared
pathway called the system bus, which is made up of the data
bus, the address bus, and the control bus. There is also a power
bus, and some architectures may also have a separate I/O bus.


The CPU
The CPU (central processing unit) is an older term for processor and microprocessor, the central unit in a computer containing the logic circuitry that performs the instructions of a computer's programs.
NOTABLE TYPES
- RISC: Reduced Instruction Set Computer
  - Introduced in the mid-1980s
  - Requires few transistors
  - Capable of executing only a very limited set of instructions
- CISC: Complex Instruction Set Computer
  - Complex CPUs that had ever-larger sets of instructions

RISC or CISC: The Great Controversy


RISC proponents argue that RISC machines are both cheaper
and faster, and are therefore the machines of the future.
Skeptics note that by making the hardware simpler, RISC
architectures put a greater burden on the software. They argue
that this is not worth the trouble because conventional
microprocessors are becoming increasingly fast and cheap
anyway.
The TRUTH!
CISC and RISC implementations are becoming more and more
alike. Many of today's RISC chips support as many
instructions as yesterday's CISC chips. And today's CISC
chips use many techniques formerly associated with RISC
chips.


Under the hood of a typical CPU


What you need to Know about a CPU


Processing speed
- The clock frequency is one measure of how fast a computer is (however, the length of time to carry out an operation depends not only on how fast the processor cycles, but also on how many cycles are required to perform a given operation); see the sketch below.
Voltage requirement
- Transistors (electronic switches) in the CPU require some voltage to trigger them.
- In the pre-486DX/66 days, everything was 5 volts.
- As chips got faster and power became a concern, designers dropped the chip voltage down to 3.3 volts (external voltage) and a 2.9 V or 2.5 V core voltage.
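As a minimal sketch of that point (all numbers invented, not taken from this course), the C fragment below compares two hypothetical CPUs: the one with the higher clock frequency is actually slower for an operation that costs it more cycles.

    /* Clock frequency alone does not determine speed: time = cycles / frequency. */
    #include <stdio.h>

    int main(void) {
        double freq_a_hz = 200e6;   /* CPU A: 200 MHz clock... */
        double cycles_a  = 12.0;    /* ...but needs 12 cycles for the operation */
        double freq_b_hz = 100e6;   /* CPU B: 100 MHz clock... */
        double cycles_b  = 4.0;     /* ...but needs only 4 cycles */

        printf("CPU A: %.0f ns\n", cycles_a / freq_a_hz * 1e9);   /* 60 ns */
        printf("CPU B: %.0f ns\n", cycles_b / freq_b_hz * 1e9);   /* 40 ns */
        return 0;
    }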

More on Voltage Requirements


Power consumption equates largely with heat generation, which is a primary enemy in achieving increased performance. Newer processors are larger and faster, and keeping them cool can be a major concern.
Reducing power usage is a primary objective for the designers of notebook computers, since they run on batteries with a limited life. (They are also more sensitive to heat problems, since their components are crammed into such a small space.)
Designers compensate by using lower-power semiconductor processes and by shrinking the circuit size and die size. Newer processors reduce voltage levels even more by using what is called a dual-voltage, or split-rail, design.

More on Dual Voltage Design


A split-rail processor uses two different voltages.
The external or I/O voltage is higher, typically 3.3 V, for compatibility with the other chips on the motherboard.
The internal or core voltage is lower, usually 2.5 to 2.9 volts. This design allows these lower-voltage CPUs to be used without requiring wholesale changes to motherboards, chipsets, etc.

Power consumption versus speed of some processors


MEMORY
Computers have hierarchies of memories that may be classified according to function, capacity, and response time.
- Function
  "Reads" transfer information from the memory; "writes" transfer information to the memory:
  - Random Access Memory (RAM) performs both reads and writes.
  - Read-Only Memory (ROM) contains information stored at the time of manufacture that can only be read.
  - Programmable Read-Only Memory (PROM) is ROM that can be written once at some point after manufacture.
- Capacity
  bit = smallest unit of memory (value of 0 or 1); byte = 8 bits.
  In modern computers, the total memory may range from, say, 16 MB in a small personal computer to several GB (gigabytes) in large supercomputers.

More on memory

Memory Response
Memory response is characterized by two different measures:
- Access time (also termed response time or latency) defines how quickly the memory can respond to a read or write request.
- Memory cycle time refers to the minimum period between two successive requests to the memory.
- Access times vary from about 80 ns [ns = nanosecond = 10^(-9) seconds] for chips in small personal computers to about 10 ns or less for the fastest chips in caches and buffers. For various reasons, the memory cycle time is longer than the access time of the memory chips (i.e., the length of time between successive requests is more than the 80 ns access time of the chips in a small personal computer); the sketch below illustrates the difference.
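A minimal sketch of the distinction, reusing the 80 ns access time above and an assumed 120 ns cycle time (invented for illustration): latency tells you how long one request takes, while cycle time limits how often requests can be issued.

    /* Access time vs. memory cycle time (80 ns from above, 120 ns assumed). */
    #include <stdio.h>

    int main(void) {
        double access_ns = 80.0;    /* time until the data for one request is ready */
        double cycle_ns  = 120.0;   /* assumed minimum spacing between two requests */

        printf("latency per request: %.0f ns\n", access_ns);
        printf("max request rate   : %.1f million requests/s\n", 1e3 / cycle_ns);
        return 0;
    }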


The I/O BUS


A Computer transfers data from disk to CPU, from
CPU to memory, or from memory to the display
adapter etc.
To avoid having a separate circuits between every pair
of devices, the BUS is used.
Definition:
The Bus is simply a common set of wires that
connect all the computer devices and chips together


Different functions for Different wires of the bus


Some of these wires are used to transmit data.
Some send housekeeping signals, like the clock pulse. Some transmit a number (the "address") that identifies a particular device or memory location.
Use of the address
The computer chips and devices watch the address wires and respond only when their own identifying number (address) is transmitted; only then do they transfer data (see the sketch below).
Problem!
Starting with machines that used the 386 CPU, CPUs and memory ran faster than other I/O devices.
Solution
- Separate the CPU and memory from all the I/O. Today, memory is only added by plugging it into special sockets on the main computer board.
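As a rough illustration of address decoding (not a real bus protocol; the device names and addresses are invented), each device compares the value on the address lines with its own address and reacts only on a match:

    /* Toy address decoding: every device watches the address lines and
       responds only when its own identifying number appears. */
    #include <stdint.h>
    #include <stdio.h>

    #define TIMER_ADDR    0x40u   /* invented device addresses */
    #define KEYBOARD_ADDR 0x60u

    static void device_watch(uint16_t my_addr, uint16_t bus_addr, const char *name) {
        if (bus_addr == my_addr)
            printf("%s: address match, driving the data lines\n", name);
        /* otherwise the device ignores the transaction */
    }

    int main(void) {
        uint16_t address_bus = KEYBOARD_ADDR;   /* the CPU puts an address on the bus */
        device_watch(TIMER_ADDR,    address_bus, "timer");
        device_watch(KEYBOARD_ADDR, address_bus, "keyboard controller");
        return 0;
    }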

Bus Speeds
Either multiple buses with different speeds are used, or a single bus supporting different speeds.
In a modern PC, there may be a half dozen different bus areas.
There is certainly a "CPU area" that still contains the CPU, memory, and basic control logic.
There is a "High Speed I/O Device" area that is either a VESA Local Bus (VLB) or a PCI bus.


Some Bus Standards

ISA (Industry Standard Architecture) bus
In 1987 IBM introduced a new Micro Channel (MCA) bus.
The other vendors developed an extension of the older ISA interface called EISA.
VESA Local Bus (VLB), which became popular at the start of 1993.

More Bus Standards


The PCI bus was developed by Intel.
PCI is a 64-bit interface in a 32-bit package.
The PCI bus runs at 33 MHz and can transfer 32 bits of data (four bytes) every clock tick.
That sounds like a 32-bit bus! However, a clock tick at 33 MHz is 30 nanoseconds, and memory only has a speed of 70 nanoseconds. When the CPU fetches data from RAM, it has to wait at least three clock ticks for the data. By transferring data every clock tick, the PCI bus can deliver the same throughput on a 32-bit interface that other parts of the machine deliver through a 64-bit path; the arithmetic below makes this concrete.
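A back-of-the-envelope check of the numbers above (a sketch, not a precise model of PCI or DRAM timing):

    /* Rough throughput comparison using the figures quoted above. */
    #include <stdio.h>

    int main(void) {
        /* PCI: 4 bytes transferred every 30 ns clock tick */
        double pci_mb_per_s = 4.0 / 30e-9 / 1e6;    /* ~133 MB/s */

        /* 64-bit memory path: 8 bytes, but only once per ~70 ns access (simplified) */
        double mem_mb_per_s = 8.0 / 70e-9 / 1e6;    /* ~114 MB/s */

        printf("PCI, 32-bit, every tick     : ~%.0f MB/s\n", pci_mb_per_s);
        printf("64-bit path at 70 ns access : ~%.0f MB/s\n", mem_mb_per_s);
        return 0;
    }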

Things to know about I/O Bus


Buses transfer information between parts of a computer.
Smaller computers have a single bus; more advanced
computers have complex interconnection strategies.
Things to know about the bus
Transaction = Unit of communication on bus.
Bus Master = The module controlling the bus at a particular
time.
Arbitration Protocol = Set of signals exchanged to decide
which of two competing modules will control a bus at a
particular time.
Communication Protocol = Algorithm used to transfer data on
the bus.
Asynchronous Protocol = Communication algorithm that can begin at any time; requires overhead to notify receivers that a transfer is about to begin.

Things to know about the bus continued

Synchronous Protocol = Communication algorithm that can begin only at well-known times defined by a global clock.
Transfer Time = Time for data to be transferred over the bus in a single transaction.
Bandwidth = Data transfer capacity of the bus; usually expressed in bits per second (bps). Sometimes termed throughput.
Bandwidth and transfer time measure related things, but bandwidth takes into account required overheads and is usually a more useful measure of the speed of the bus; see the sketch below.
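A minimal sketch of that distinction (all numbers invented): effective bandwidth charges each transaction for overhead such as arbitration and addressing, so it comes out lower than the raw transfer rate.

    /* Effective bandwidth vs. raw transfer rate, with made-up numbers. */
    #include <stdio.h>

    int main(void) {
        double payload_bytes = 32.0;    /* data moved per transaction */
        double transfer_ns   = 240.0;   /* time spent actually moving the data */
        double overhead_ns   = 120.0;   /* assumed arbitration + addressing overhead */

        double raw_mb_s = payload_bytes / (transfer_ns * 1e-9) / 1e6;
        double eff_mb_s = payload_bytes / ((transfer_ns + overhead_ns) * 1e-9) / 1e6;

        printf("raw transfer rate : ~%.0f MB/s\n", raw_mb_s);   /* ~133 MB/s */
        printf("with overhead     : ~%.0f MB/s\n", eff_mb_s);   /* ~89 MB/s  */
        return 0;
    }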

PERFORMANCE AND
APPLICATION CHANGE
OVER TIME


Performance

Both hardware and software affect performance:
The algorithm determines the number of source-level statements.
The language, compiler, and architecture determine the machine instructions.
The processor and memory determine how fast instructions are executed.
(The sketch below puts these together.)
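One common way to combine the three layers is the execution-time equation: time = instruction count × cycles per instruction ÷ clock frequency. The sketch below uses invented numbers; the algorithm, language, and compiler set the instruction count, the ISA and implementation set the CPI, and the hardware sets the clock rate.

    /* Execution time = instructions * CPI / clock frequency (illustrative numbers). */
    #include <stdio.h>

    int main(void) {
        double instructions = 2e9;   /* determined by algorithm, language, compiler */
        double cpi          = 1.5;   /* average cycles per instruction (ISA + implementation) */
        double clock_hz     = 1e9;   /* determined by the processor implementation */

        printf("execution time: %.1f s\n", instructions * cpi / clock_hz);   /* 3.0 s */
        return 0;
    }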


Tasks of Computer Architects


Computer architects must design a computer to meet functional requirements as well as price, power, and performance goals. Often, they also have to determine what the functional requirements are, which can be a major task. Once a set of functional requirements has been established, the architect must try to optimize the design.
Here are three major application areas and their main requirements:
Desktop computers: focus on optimizing cost-performance as measured by a single user, with little regard for program size or power consumption.
Server computers: focus on availability, scalability, and throughput cost-performance.
Embedded computers: driven by price and often power constraints; code size is also important.

Where is the Market?


Applications Change over Time


Data sets & memory requirements grow larger
- Cache & memory architecture become more critical
Standalone → networked
- I/O integration & system software become more critical
Single task → multiple tasks
- Parallel architectures become critical
Limited I/O requirements → rich I/O requirements
- 60s: tapes & punch cards
- 70s: character-oriented displays
- 80s: video displays, audio, hard disks
- 90s: 3D graphics, networking, high-quality audio
- 00s: real-time video, immersion, ...

Application Properties to
Exploit in Computer Design
Locality in memory/I/O references
Programs work on a subset of instructions/data at any point in time
Both spatial and temporal locality (see the sketch below)

Parallelism
Data-level (DLP): same operation on every element of a data sequence
Instruction-level (ILP): independent instructions within a sequential program
Thread-level (TLP): parallel tasks within one program
Multi-programming: independent programs
Pipelining

Predictability
Control-flow direction, memory references, data values
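A minimal sketch of both kinds of locality in ordinary code (nothing architecture-specific assumed):

    /* Spatial locality: consecutive array elements are touched in order, so each
       access lands near recently used data (same or nearby cache lines).
       Temporal locality: `sum` and the loop counter are reused on every iteration,
       so they stay in registers or cache. */
    #include <stdio.h>

    int main(void) {
        static int data[1024];
        long sum = 0;

        for (int i = 0; i < 1024; i++)
            data[i] = i;             /* sequential writes: spatial locality */

        for (int i = 0; i < 1024; i++)
            sum += data[i];          /* sequential reads plus reuse of sum */

        printf("sum = %ld\n", sum);
        return 0;
    }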

Levels of Machines

There are a number of levels in a computer, from the user level down to the transistor level.


Computer Architecture
A modern meaning of the term computer architecture covers three aspects of computer design:
instruction set architecture,
computer organization, and
computer hardware.
Instruction Set Architecture (ISA) refers to the actual programmer-visible machine interface, such as the instruction set, registers, memory organization, and exception handling.
Two main approaches: RISC and CISC architectures.
Computer organization and computer hardware are two components of the implementation of a machine.


How Do the Pieces Fit Together?

(Layered view, from software down to hardware; the Instruction Set Architecture is the interface between the two:)
Application
Operating System
Compiler / Firmware
Instruction Set Architecture
Instruction set processor / Memory system / I/O system
Datapath & Control
Digital Design
Circuit Design


Instruction Set Architecture (ISA)

Complex Instruction Set (CISC)
- Single instructions for complex tasks (string search, block move, FFT, etc.)
- Usually have variable-length instructions
- Registers have specialized functions
Reduced Instruction Set (RISC)
- Instructions for simple operations only
- Usually fixed-length instructions
- Large orthogonal register sets

RISC and CISC Architecture

RISC: Reduced Instruction Set Computer
CISC: Complex (and powerful) Instruction Set Computer
What does MIPS stand for?
Microprocessor without Interlocked Pipeline Stages.
The MIPS processor is one of the first RISC processors. Again, virtually all new instruction set architectures announced after 1985 have been RISC.
What is the main example of a CISC architecture processor?
Intel processors (in over 90% of computers).


RISC Architecture

RISC designers focused on two critical performance techniques in computer design:
- the exploitation of instruction-level parallelism, first through pipelining and later through multiple instruction issue;
- the use of caches, first in simple forms and later using sophisticated organizations and optimizations.


RISC ISA Characteristics


All operations on data apply to data in registers and typically change the entire register;
The only operations that affect memory are load and store operations that move data from memory to a register or from a register to memory, respectively (a sketch follows this list);
A small number of memory addressing modes;
The instruction formats are few in number, with all instructions typically being one size;
A large number of registers.
These simple properties lead to dramatic simplifications in the implementation of advanced pipelining techniques, which is why RISC instruction sets were designed this way.
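As a rough, hypothetical sketch of the load/store property (generic mnemonics, not the output of any real compiler), the comments below show how such a machine might carry out one C statement: operands are loaded into registers, the arithmetic is done register-to-register, and a store writes the result back.

    /* One statement and a hypothetical load/store instruction sequence for it. */
    int a, b, c;

    void example(void) {
        c = a + b;
        /* Hypothetical RISC sequence (illustration only):
         *   load  r1, [a]      ; memory -> register
         *   load  r2, [b]      ; memory -> register
         *   add   r3, r1, r2   ; register-to-register operation
         *   store r3, [c]      ; register -> memory
         * A CISC machine might instead allow memory operands directly in the
         * add instruction, at the cost of more complex instruction formats.
         */
    }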
