
Introduction to Computer Organization

What is Technology?

Bran Ferren:

"Stuff that doesn't work yet."

Douglas Adams:

"We no longer think of chairs as technology, we just think of
them as chairs. But there was a time when we hadn't worked
out how many legs chairs should have, how tall they should be,
and they would often crash when we tried to use them.
Before long, computers will be as trivial and plentiful as chairs
(and a couple of decades or so after that, as sheets of paper
or grains of sand) and we will cease to be aware of the
things."

What is a Computer?

a simple question, so let's begin with a simple answer:

task: try to write down 5 points which can describe any computer

What is a Computer?

a rough definition (one of many) says that it must:

take input of some sort
produce output of some sort
process the information somehow
have some sort of information store
have some way of controlling what it does

~ Mark Burrell, in Fundamentals of Computer Architecture

What is a Computer?

notice that the rough definition doesn't even require
that a computer be a constructed machine

a human being can be a computer!

such as a person performing (part of) a calculation

in the past this actually happened

it often helped if they didn't understand the full
calculation... why?

answer: less likely to take shortcuts and make assumptions

Computer as Machine

computers as machines began to take hold as:

we learned how to design and construct them
they became faster than human computation
they became more reliable than human computation

it is this kind of computer we'll focus on from now on

looking at its organization and its architecture

how hardware interacts with software

Organization and Architecture

Organization

how the computer is controlled, signalling methods, memory type
the physical aspects of the system: circuits and components
essentially: how it works

Architecture

logical and abstract aspects of the system from the programmer's
perspective

instruction sets, instruction formats, operation codes, data types, ...
memory access methods, input/output mechanisms
essentially: how it is designed

Why is it worth understanding
organization and architecture?

possible motivations include:

knowing if a computation is practical

or even possible

learning how to optimize performance

by understanding the computer's hardware and
instruction set

usually optimize the time taken for a computation

factoring in any trade-offs with space used and price

Why is it worth understanding
organization and architecture?

possible motivations include:

knowing how to benchmark a system

perhaps for a given set of uses

knowing if a computation is transferable to another computer

will it run without modification?
if not, what modifications need to be made?

Principle of Equivalence of
Hardware and Software

Any task done by software can also be done using
hardware, and any operation performed directly by
hardware can be done using software.
~ Null & Lobur, in Computer Organization & Architecture

think: your programs are algorithms...

...run by other algorithms, run by...
...until we reach the physical machine level, an algorithm
implemented as an electronic device

so it's all algorithms

Principle of Equivalence of
Hardware and Software

so to perform a particular computation we have a choice:

take an all-in-hardware approach:

build a special-purpose computer

such as an embedded system

or

take an all-in-software approach:

use a general-purpose computer

and run the computation as a higher-level program

(or do something in-between)

Principle of Equivalence of
Hardware and Software

Implementing in Hardware versus Software:

benefits (of the hardware approach):

higher speed
possibly cheaper to initially produce
possibly more reliable

drawbacks (of the hardware approach):

cannot adapt to meet changing requirements
replacing hardware is more costly than replacing software

A Potted History

Generation   Time Period    Key Technology
0            pre-1945       mechanical devices
1            1945-1955      valves (vacuum tubes)
2            1955-1965      transistors
3            1965-1980      integrated circuits
4            1980-present   LSI & VLSI

(dates are approximate)

adapted from Table 1.1, Chapter 1, Fundamentals of Computer Architecture

Generation 0: Mechanical Devices

circa 1630: Schickard invents the Calculating Clock

6 digit addition & subtraction

1642: Pascal invents the Pascaline

another calculator, able to perform carry

circa 1680: Leibniz invents the Stepped Reckoner

addition, subtraction, multiplication, division

all non-programmable, all without memory

Generation 0: Mechanical Devices

1822: Babbage invents the Difference Engine

named after the method of differences, solves polynomials

1833: Babbage invents the Analytical Engine

performs any mathematical operation; Turing complete
never actually completed (government withdrew funding)
punch card programming based upon Jacquard's 1801 weaving loom

circa 1890: Hollerith (whose company later became IBM) invents the Hollerith Card

used for automated data processing for decades

Generation 1: with Vacuum Tubes

1936: Turing defines his Turing Machine

the theoretical grounding for computing machines

circa WWII: Turing and others invent Colossus;
Atanasoff invents the ABC; Mauchly & Eckert invent ENIAC

the first electronic (digital) computers

all used vacuum tubes (valves)

allowed control of the flow of electrons

but only ENIAC was general purpose

ENIAC

initial inspiration was the desire to predict weather patterns

reduced the time taken to calculate a table from 20 hours (using
human computers) to 30 seconds

worked in base 10

US military recognized its potential for calculating tables for
ballistic trajectories

memory capacity of approximately 1000 bits
100 000 cycles per second

later used to test key calculations for the atomic bomb program

ENIAC

(US Army photograph)

Generation 2: with Transistors

vacuum tubes were unreliable

often more downtime than uptime

1947: Bardeen, Brattain, Shockley invent the transistor

solid state
consumes less power than a vacuum tube
more reliable

electrons behave better in a solid than in a vacuum

smaller (and smaller and smaller...)

computers still the preserve of very large organizations and universities

Generation 3: Integrated Circuits

1958: Kilby invents the Integrated Circuit, or Microchip

electronic circuit on a single plate
made of germanium
contains many transistors (and other components) in a very small
space; easy to mass produce

1959: Noyce (co-founder of Intel) uses silicon instead

the silicon chip is born

1971: Intel produces the Intel 4004

the first microprocessor
2300 transistors on a single chip

Generation 3: Integrated Circuits

mid 1960s-1970s: IBM produce the System/360 family of computers

all solid-state components
used the same assembly language

providing the novelty of compatibility

no need to re-write all those programs

circa 1960s: time-sharing and multi-programming appear

allows a computer to run multiple concurrent applications
allows users to share a computer

a major innovation, leading to computers appearing in small businesses

Generation 4: LSI & VLSI

LSI: Large Scale Integration

1,000 to 10,000 components per chip

VLSI: Very Large Scale Integration

more than 10,000 components per chip

in 2005 more than 1 billion transistors were put on a single chip
the current record is over 20 billion
modern commercial example: the Xeon Haswell-EP CPU has over
5.5 billion transistors

now: computers are pervasive, getting smaller, faster, cheaper

Over the Years

Year   Technology Used      Relative Performance/Unit Cost
1951   vacuum tube          1
1965   transistor           35
1975   integrated circuit   900
1995   VLSI                 2,400,000
2005   Ultra LSI            6,200,000,000

Figure 1.11, Chapter 1, Computer Organization & Design

Moore's Law

the density of transistors in an integrated circuit will double
every year
~ Gordon Moore, 1965

revised prediction:

the density of silicon chips doubles every 18 months

but is it true?

so, roughly correct

but can't hold true forever

physical limitations and Rock's Law will likely apply

(transistor-count chart taken from Wikimedia Commons)
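the arithmetic behind a fixed doubling period can be sketched in a few lines of Python (the function name and figures are illustrative, not real chip data):

```python
# Growth factor implied by a fixed doubling period, as in Moore's
# revised prediction. `growth_factor` is an illustrative helper.

def growth_factor(years, doubling_months=18):
    """How many times density multiplies over `years`."""
    return 2 ** (years * 12 / doubling_months)

# doubling every 18 months for a decade:
print(round(growth_factor(10)))  # 102 — roughly a hundredfold
```

this is why even a modest change to the doubling period (12 vs 18 vs 30 months) makes an enormous difference over a couple of decades.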

More Moore's

Intel now believe that the density of silicon chips is doubling every 30 months

tick-tock strategy:

tick: shrink the process
tock: introduce the new microarchitecture

the current tock is Skylake at 14nm
the next tick will be Cannonlake at 10nm

getting smaller might need new materials

getting faster also happens by being smarter:

pipelining, parallelisation, distribution and delegation

other goals such as energy efficiency are now considered

Computer Level Hierarchy

Figure 1.3, Chapter 1, Computer Organization & Architecture

Computer Level Hierarchy

Level 6: User

applications
other levels invisible to most users most of the time

Level 5: High Level Language

such as Java, C, Fortran

a compiler or interpreter converts these to Assembly Language,
usually very efficiently

the user is still abstracted from the actual implementation of concepts
such as data types and memory storage

Computer Level Hierarchy

Level 4: Assembly Language

instructions here are directly translated to machine language

a one-to-one translation

very little need for users to program at this level nowadays

Level 3: System Software

deals with operating system instructions

handles multiprogramming, protecting memory and so on
instructions translated from assembly to machine language
might pass unmodified through this level

Computer Level Hierarchy

Level 2: Machine

the Instruction Set Architecture (ISA) particular to the architecture of this machine

programs written in machine language don't need compilation,
interpretation or any assembly

but given the one-to-one relation to assembly, only a lunatic would try
to code in pure machine language

Level 1: Control

the Control Unit ensures:

proper decoding and execution of instructions
correct movement of data

it interprets instructions given to it one-at-a-time

Computer Level Hierarchy

Level 0: Digital Logic

the physical level where the digital circuits reside

gates and wires

we'll be most concerned with those lower 3 levels

with some practical use of Assembly added in

von Neumann Architecture

named after John von Neumann, who published the first paper describing it

consists of three hardware systems:

a Central Processing Unit (CPU) with a control unit, an Arithmetic Logic
Unit (ALU), registers and a program counter

a Main Memory System, which holds program and data

an I/O System

has the capacity to carry out sequential instruction processing

contains a single (physical or logical) path between the main memory and
control unit

forces alternation of instruction and execution cycles

Fetch-Execute Cycle
(or the von Neumann execution cycle) has 4 steps per iteration:
1. the control unit fetches the next program instruction from
memory, using the program counter to determine where the
instruction is located
2. the instruction is decoded into a language which the ALU can
understand
3. any data operands required to execute the instruction are
fetched from memory and placed into registers within the CPU
4. the ALU executes the instruction and places the results in
registers or memory
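the four steps above can be sketched as a toy simulation in Python; the instruction format, addresses and register names here are invented for illustration, not a real ISA:

```python
# A toy run of one fetch-execute iteration. The instruction encoding
# (opcode, source addresses, destination) is invented for illustration.

memory = {
    0: ("ADD", 10, 11, 12),   # instruction: mem[12] = mem[10] + mem[11]
    10: 7, 11: 5, 12: 0,      # data operands and a result slot
}
pc = 0                        # program counter
registers = {}

# 1. fetch: the program counter locates the next instruction
instruction = memory[pc]
pc += 1

# 2. decode: split into opcode and operand addresses
op, src_a, src_b, dest = instruction

# 3. fetch any required data operands from memory into registers
registers["A"] = memory[src_a]
registers["B"] = memory[src_b]

# 4. execute (the ALU's ADD) and place the result
if op == "ADD":
    memory[dest] = registers["A"] + registers["B"]

print(memory[12])  # 12
```

a real control unit loops over these four steps continuously; the sketch runs a single iteration to keep each step visible.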

von Neumann Architecture

this single pathway creates the von Neumann Bottleneck

Figure 1.4, Chapter 1, Computer Organization & Architecture

von Neumann Bottleneck

program and data memory cannot be accessed at the same time

throughput is far smaller than the rate at which the CPU can work

large impact on processing speed when large amounts of data are
handled with little processing

with increases in CPU and memory size the bottleneck becomes worse

von Neumann Bottleneck

one remedy, the Harvard Architecture, separates data and
instructions into two pathways

most modern systems use a System Bus to alleviate it:

Figure 1.5, Chapter 1, Computer Organization & Architecture

Non-von Neumann Architectures

examples include:

neural networks
cellular automata
quantum computation
parallel computers

Parallel Computers

the most popular alternative to von Neumann

many varieties of this:

many nodes: shared memory, distributed memory
multi-core systems

but they are often effectively cooperating von Neumann machines

with many processors, communication overhead can be high

and they are subject to Amdahl's Law:

effectively, every algorithm has a sequential part that limits the
speedup available from multiprocessing
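the limit can be made concrete with the standard form of Amdahl's Law (the formula is the textbook one; the function name is illustrative):

```python
# Amdahl's Law: overall speedup when a fraction p of a program can be
# parallelised across n processors. Standard formula, illustrative helper.

def amdahl_speedup(p, n):
    """p: parallelisable fraction (0..1); n: number of processors."""
    return 1 / ((1 - p) + p / n)

# even with 95% of the work parallelisable, 1000 processors
# give less than a 20x speedup:
print(round(amdahl_speedup(0.95, 1000), 1))  # 19.6
```

the sequential 5% dominates: as n grows without bound the speedup can never exceed 1 / (1 - p), here 20x, no matter how many processors are added.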

Common Prefixes

usually referring to these - but not always

Table 1.1, Chapter 1, Computer Organization & Architecture

To Summarise

we've considered the basic definition of a computer

we've taken an overview of:

computer organization - how it works
computer architecture - how it's designed
the computer level hierarchy

we've taken a brisk tour of the history of the computer

we've defined the von Neumann architecture

we've surveyed its problems and alternative models

References

outline primarily based upon:

Chapter 1, Computer Organization & Architecture (3rd
Edition), Null & Lobur

other material and concepts used from:

Chapter 1, Fundamentals of Computer Architecture (3rd
Edition), Burrell

Chapter 1, Computer Organization & Design (Revised
4th Edition), Patterson & Hennessy
