
Copyright © 2007 by THA (aka Christopher Ma) for the Box network


This article is available from both NewOrder and Code’s file vault

Computing Fundamentals
Many programmers nowadays start programming in a high-level language like
C++, Java, or C#. They learn the basics of the language, learn about algorithms and
data structures, and pick up a few “tricks” along the way. Unfortunately, many of these
programmers are ignorant of the lower-level mechanics of the machine they’re
programming. In the days when dinosaurs roamed the earth, you had to learn the low-level
nitty-gritty stuff, which took roughly a couple of weeks, before you could even write
your first program. This article series is meant to give you a general idea of what goes on
in the machine when you instruct it to do something. It starts its journey at the bits
and transistors and marches its way to the top. Hopefully, the articles will give you enough
background to read other books that cover this subject intensively.

In computing, it is very common to represent things in layers, for various reasons.
If we apply this layered approach here, we get something like the following:

Problem Statement
Algorithm
Implementation Language
Operating System
Assembly Language
Instruction Set Architecture
MicroArchitecture
Combinatorial Circuits
Logic Gates
Transistors
Stuff only science can explain
Table 1: The Layers

Many programmers are familiar with the top three layers but are ignorant of the lower
layers. The article will start from the next-to-last layer, the transistor, and move up in
subsequent articles. But before learning about transistors we have to take a prerequisite
sidetrack and learn about data representation, integers, and bitwise operations.

At the most fundamental level, a computer only understands bits. No matter what
language you use or what run-time you run a program on, it all gets translated into binary
bits. A bit is the most fundamental piece of information in a computer. It is either on or
off, representing high and low electrical states (more on this in the transistor section).
Generally speaking, a group of 8 bits is called a byte, 16 bits a word (although
this term does have a somewhat ambiguous meaning), 32 bits a double-word, and 64
bits a quad-word. The computer uses a binary (base 2) number system, so it
makes a bit of sense to cover it briefly.
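
If you want to see those sizes from a program’s point of view, here is a small C sketch (my own illustration, not part of the machine-level material itself) using the fixed-width types from <stdint.h>, which line up with the names just described:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The fixed-width types from <stdint.h> match the sizes named above. */
    printf("byte        : %zu bits\n", 8 * sizeof(uint8_t));   /*  8 */
    printf("word        : %zu bits\n", 8 * sizeof(uint16_t));  /* 16 */
    printf("double-word : %zu bits\n", 8 * sizeof(uint32_t));  /* 32 */
    printf("quad-word   : %zu bits\n", 8 * sizeof(uint64_t));  /* 64 */
    return 0;
}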
The binary system in 5 minutes

We start with 2 digits:

0
1

Since a computer uses the binary system, this is as far as it can count using only these
two digits. In the base-10 system, we have:

0
1
2
3
4
5
6
7
8
9

Ok, we can count up to nine (ten symbols) in the base-10 system. Do we stop and say, “That’s all we can do
with base 10, since we ran out of fingers/symbols”? No; instead, we repeat that sequence
again, except we place a 1 at the front:

10
11
12
13
14
15
16
17
18
19

Ok, so we got ten more numbers. Do we stop and say “That’s it, we ran out of toes” or
“That’s all we’re capable of doing”? No, we repeat the original sequence and add a 2 at
the beginning:

20
21
22
23
24
25
26
27
28
29

When we get to 99, do we stop and give up? Being human, no, we apply the same
concept again: we repeat numbers 0 to 99 and add a 1 at the beginning (we do have to
add a place-holder for the first ten numbers so that they are two digits: 00, 01, 02,…, 09).

Going back to the base-2 system, to count beyond 1, we just do the same thing: we repeat
the sequence and add a 1 to the beginning:

10
11

And we get numbers 2 and 3. Applying this technique again, we repeat the original
sequence up to this point and add a one at the beginning getting:

100
101
110
111

And we get 4, 5, 6, and 7. Apply the technique again and we get:

1000
1001
1010
1011
1100
1101
1110
1111

Which is 8, 9, 10, 11, 12, 13, 14, and 15. Putting it all in chart form, we have:

Base 10   Base 2      Base 10   Base 2
0         0000        8         1000
1         0001        9         1001
2         0010        10        1010
3         0011        11        1011
4         0100        12        1100
5         0101        13        1101
6         0110        14        1110
7         0111        15        1111
By this time, you should get the general idea. There is a mathematical formula
that allows you to convert between the different bases. But I surmise that the majority of
readers are either already familiar with it or can get access to that information rather
quickly and easily. Otherwise, if there’s demand, I can put all the information in a
separate article and still keep my 5 minute promise in this section.
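
For readers who would rather experiment than read the formula, the following small C sketch (an illustration of the standard positional rule, with helper names binary_to_decimal and print_binary that are my own) converts a string of binary digits to its decimal value and prints a value back out in binary:

#include <stdio.h>
#include <string.h>

/* Convert a string of '0'/'1' characters to its decimal value. */
unsigned binary_to_decimal(const char *bits)
{
    unsigned value = 0;
    size_t length = strlen(bits);
    for (size_t i = 0; i < length; i++)
        value = value * 2 + (bits[i] - '0');   /* shift left one binary place, add the new bit */
    return value;
}

/* Print the low 'width' bits of a value, most significant bit first. */
void print_binary(unsigned value, int width)
{
    for (int i = width - 1; i >= 0; i--)
        putchar(((value >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    printf("%u\n", binary_to_decimal("1011"));  /* prints 11   */
    print_binary(11, 4);                        /* prints 1011 */
    return 0;
}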

There are a couple of tricks you may want to keep in mind when going from
binary to decimal. Look at the binary representations of 10 and 11; specifically, notice that the two
low-order bits read “10” and “11”. Also, look at numbers that are powers of 2 (2, 4, 8, 16, etc.). Look at
their binary counterparts and count the number of zeros; you’ll notice that the value is always 2^n
where n = number of zeros (e.g., binary 1000 has three zeros, so in base 10 it’s 2^3, or 8).
Also look at the binary numbers that are all 1’s. You’ll notice that the value is always one
less than a power of 2, so given n 1’s the decimal value is 2^n – 1 (e.g., binary 111
has three ones, so in base 10 it’s 2^3 – 1, or 7).
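
Those two tricks translate directly into shift expressions in C-family languages; a tiny illustrative sketch:

#include <stdio.h>

int main(void)
{
    int n = 3;
    printf("%d\n", 1 << n);        /* binary 1000 -> 8  (2^n)     */
    printf("%d\n", (1 << n) - 1);  /* binary  111 -> 7  (2^n - 1) */
    return 0;
}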

Integer representation: signed vs. unsigned, integer addition, and overflow
We know that the binary system can represent any whole number and is just as capable
as the familiar base-10 system. However, in the base-10 system we also have a way to
represent negative numbers. Negative numbers are a valid representation and many
people work with them on a regular basis. So how can a machine represent negative
numbers? A system called 2’s complement (two’s complement) was devised, based on two
earlier systems that were in use many years ago.

We say an integer is unsigned if it can only represent non-negative numbers. This is
what you’re familiar with. Signed means that it can represent either a positive or a
negative number. To use the 2’s complement notation, you first have to decide
whether to interpret the number as signed or unsigned. If you’ve decided
to interpret it as signed, then you have to determine whether the number is positive or
negative.

You first examine the high-order bit (called the sign-bit or the most-significant bit
or the leftmost bit). If the sign bit is 0, then the number is positive and you interpret it the
usual way. To find the magnitude when the sign bit is 1, you first invert all the bits
(make a 0 into a 1 and vice-versa, otherwise known as a bitwise NOT) and then add 1.

Binary addition is pretty simple since we are only dealing with two digits. If we
add a zero to a one (or vice-versa) the answer is always 1. Zero plus zero is always zero,
and one plus one is two, which is written 10 in binary.

0+0 0
0+1 1
1+0 1
1+1 10
Table 2: Binary addition (results are all in binary)
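
If you want to check Table 2 mechanically, this small illustrative C loop prints every one-bit addition, with the high digit of the two-digit result being the carry:

#include <stdio.h>

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int sum = a + b;
            /* print the two-bit result: high bit is the carry, low bit is the sum */
            printf("%d + %d = %d%d\n", a, b, (sum >> 1) & 1, sum & 1);
        }
    return 0;
}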

Some examples of interpreting signed numbers:

Given 1011, interpret it as a signed number.

Since we are told to interpret it as a signed number, we examine the high order bit, which
is a 1, telling us it’s a negative number. To find the magnitude, we invert the bits getting
0100 and then add a 1 giving us 0101 or 5. So the answer is -5.

Given 1000, interpret it as a signed number.

The high-order bit tells us it is a negative number, so we invert the bits which gives us
0111. We add 1 which gives us 1000 (yes the original number) and the answer is -8.

Given 0001, interpret it as a signed number.

We examine the sign bit; it is a 0, so the number is positive and we
interpret it the usual way, which gives us the answer 1.

Convert -1 to its binary signed representation

1 is represented as 0001. We then perform the 2’s complement operation as usual (the
same operation works in both directions): we invert all the bits (giving us 1110) and add 1, giving us the final
result: 1111.
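
The interpretation steps above can also be checked mechanically. The sketch below is illustrative C (the helper name interpret_signed4 is mine, and it assumes 4-bit patterns stored in an ordinary int); it follows exactly the rule described: if the sign bit is 1, invert the bits, add 1, and attach a minus sign.

#include <stdio.h>

/* Interpret the low 4 bits of 'pattern' as a two's-complement signed number. */
int interpret_signed4(unsigned pattern)
{
    pattern &= 0xF;                                    /* keep only 4 bits            */
    if (pattern & 0x8) {                               /* sign bit set -> negative    */
        unsigned magnitude = ((~pattern) + 1) & 0xF;   /* invert the bits, then add 1 */
        return -(int)magnitude;
    }
    return (int)pattern;                               /* sign bit clear -> as usual  */
}

int main(void)
{
    printf("%d\n", interpret_signed4(0xB)); /* 1011 -> -5 */
    printf("%d\n", interpret_signed4(0x8)); /* 1000 -> -8 */
    printf("%d\n", interpret_signed4(0x1)); /* 0001 ->  1 */
    printf("%d\n", interpret_signed4(0xF)); /* 1111 -> -1 */
    return 0;
}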

We know how to add two numbers, but what about subtraction? If you remember
from pre-algebra, subtraction is simply adding the opposite. That is, 13 – 5 is the same as
13 + (-5). We know how to represent negative numbers and we know how to add in
binary, so it looks like we don’t have any problems. 13 -> 01101 and -5 -> 11011, and
adding them together, 01101 + 11011, gives us 01000 (8) with an extra 1 carried out
(carries and overflow are covered in the next two paragraphs). So we see why the 2’s complement
notation is used: it gives us the correct answer when we add and subtract two numbers.
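
Here is the same 13 – 5 example as a small illustrative C sketch, restricted to 5 bits the way the text is: negate 5 by inverting and adding 1, add, and keep only the 5 output bits.

#include <stdio.h>

int main(void)
{
    unsigned bits5 = 0x1F;               /* mask that keeps only 5 bits             */
    unsigned a = 13;                     /* 01101                                   */
    unsigned b = 5;                      /* 00101                                   */
    unsigned neg_b = (~b + 1) & bits5;   /* two's complement of 5 -> 11011 (-5)     */
    unsigned sum = a + neg_b;            /* 01101 + 11011 = 101000                  */
    printf("result = %u\n", sum & bits5);     /* low 5 bits: 01000 -> 8             */
    printf("carry  = %u\n", (sum >> 5) & 1);  /* the extra 1 that was carried out   */
    return 0;
}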
Just for reference, the predecessors to the 2’s-complement type were the 1’s
complement, which is like the 2’s complement except that it doesn’t add a 1 in the final step,
and the signed-magnitude type, which simply uses the high-order bit as the sign and the
remaining bits as the magnitude. Both of these representations were used
at one point on early machines. Both required special hardware to do addition, so
the 2’s-complement type was devised.

Sometimes when we add two numbers together, we get an extra digit, the carry or
overflow bit. An overflow means that the result could not be fully expressed in the
number of output bits available. For instance, if we are limited to 1-bit inputs and 1 bit
of output, then given 1 + 1 = 10, the result is 0 with a 1 as the carry or overflow. It’s
pretty easy to determine if there’s an overflow when adding unsigned numbers: just check
whether the result has a carry. For signed numbers, you have to use a more subtle test.
If you add a positive number and a negative number, the result cannot overflow.
If you add two negative numbers and the result is positive, then
the result has overflowed. Similarly, if you add two positive numbers and the result is a
negative number, then you have an overflow.
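
That sign-based test translates directly into code. The sketch below is illustrative C (the helper name add8 is mine, and it assumes 8-bit values held in plain ints so the extra bits are easy to inspect); it reports unsigned overflow from the carry out of the top bit and signed overflow by comparing the signs of the operands and the result.

#include <stdio.h>

/* Add two 8-bit values and report unsigned and signed overflow. */
void add8(unsigned x, unsigned y)
{
    unsigned sum   = (x + y) & 0xFF;        /* keep only the 8 output bits        */
    int carry_out  = ((x + y) >> 8) & 1;    /* unsigned overflow: a carry out     */
    int sign_x     = (x >> 7) & 1;
    int sign_y     = (y >> 7) & 1;
    int sign_sum   = (sum >> 7) & 1;
    /* signed overflow: both operands have the same sign, but the result's sign differs */
    int signed_ovf = (sign_x == sign_y) && (sign_sum != sign_x);

    printf("sum=%3u  unsigned overflow=%d  signed overflow=%d\n",
           sum, carry_out, signed_ovf);
}

int main(void)
{
    add8(100, 100);  /* two positives give a negative sign bit: signed overflow    */
    add8(200, 100);  /* carry out of bit 7: unsigned overflow, but no signed one   */
    return 0;
}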

Finally, we cover some basic bitwise operations. There are three fundamental
operations that you should be familiar with: AND, OR, and NOT. These operations
basically allow a computer to do any operation a user chooses. Bitwise AND and bitwise
OR are binary operations (they take two operands), while bitwise NOT is a unary operation (it takes one).

• Bitwise NOT inverts a bit. That is, if a bit is 0, then it changes it to a 1 and vice-
versa.
• Bitwise AND takes two bits and performs a so-called logic AND operation.
   o If the two bits are 1, then the result is 1. (1 AND 1 = 1)
   o Any other combination produces a 0.
      - 1 AND 0 = 0
      - 0 AND 1 = 0
      - 0 AND 0 = 0
• Bitwise OR takes two bits and performs a so-called logic OR operation.
   o If the two bits are 0, then the result is 0. (0 OR 0 = 0)
   o Any other combination gives a 1.
      - 1 OR 1 = 1
      - 1 OR 0 = 1
      - 0 OR 1 = 1

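In C and most C-family languages these three operations are written ~ (NOT), & (AND), and | (OR). A minimal sketch, purely to show the operators in action:

#include <stdio.h>

int main(void)
{
    unsigned a = 0xC;   /* 1100 */
    unsigned b = 0xA;   /* 1010 */

    printf("a & b = 0x%X\n", a & b);       /* AND -> 0x8 (1000)                  */
    printf("a | b = 0x%X\n", a | b);       /* OR  -> 0xE (1110)                  */
    printf("~a    = 0x%X\n", (~a) & 0xF);  /* NOT, kept to 4 bits -> 0x3 (0011)  */
    return 0;
}
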
Transistors and Logic gates

Whenever a new processor is released, inevitably there is a mention of the
number of transistors on the chip, breaking the record of the previous processor.
A transistor is simply a device that acts as a switch. It is not unlike the light switch you
have in your own home. The chief difference is that a light switch is a mechanical
switch, while a transistor is controlled electrically.

Although there are many types of transistors, they all work on a similar principle.
There is a power source that comes in one end of the transistor and goes out another end.
Which end it goes out depends on whether the switch (sometimes called the gate) inside
the transistor is open or closed. A conceptual rendering is shown below.
Figure 1: A schematic rendering of a transistor

When a certain amount of voltage, say between 2 and 3 volts, is supplied to the Base
(from now on we’ll call it IN), the gate is closed and power flows from VCC (source)
to the Emitter (ground). When there is very little voltage supplied at IN, say 0 to 1 volt, the
gate is open and power flows from the source to the Collector (herein called OUT). How
much voltage is necessary to affect the gate is highly dependent on the transistor (type of
transistor, material used to construct it, etc.). For our purposes we will neatly ignore the
amount of current necessary to affect the gate. Instead, we will use the logical
equivalents: if the gate is open and current is allowed to flow to OUT, we will denote a
one at the output. If the gate is closed and the current is directed to ground, then we will
denote a zero at the output.

Logic gates are constructed from transistors and can be seen as the next
step up from them. They allow us to construct circuits that implement the bit
operations in the previous section (bitwise NOT, AND, and OR). If you look carefully,
Figure 1 is also your very first logic gate. It is called an inverter. If a 0 is pushed in
at IN, then a 1 will be pushed out at OUT; likewise, if a 1 is pushed in at IN, then a 0 will
be pushed out at OUT. In other words, the inverter models the
bitwise NOT function.

If you take two transistors and connect them in a series circuit, then you will have
something like the image below.

Figure 2: A NAND gate


The above is called a NAND (Not AND) gate. If there is power coming in from
both IN 1 and IN 2, then both the gates are closed and power flows from source to ground
and no power flows to OUT. Any other combination will have power flow from source
to OUT. If we use the logical equivalents, then we say that if we push a 1 into both IN 1
and IN 2, then a 0 is pushed onto OUT. Any other combination that we push into IN1
and IN2 will push a 1 onto OUT.

IN 1 IN 2 OUT
0 0 1
0 1 1
1 0 1
1 1 0
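
The NAND behaviour is easy to mimic in software. The few lines of C below are only a model of the gate’s truth table, not of the circuit itself; the output is 0 only when both inputs are 1:

#include <stdio.h>

/* Model a NAND gate on single-bit (0 or 1) inputs: NOT (a AND b). */
int nand(int a, int b)
{
    return !(a & b);
}

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d NAND %d = %d\n", a, b, nand(a, b));
    return 0;
}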

If we wire the transistors in a parallel circuit, then we will get another logic gate,
shown below, called the NOR (Not OR) gate. If there is no power coming from either IN
1 or IN 2, then both transistor gates are open and current flows from source to OUT.
Any other combination will have current flow from the source to ground through one or
both transistors. Again, using the logical equivalents: if we push a 0 into both IN 1 and IN 2, then a 1 will be
pushed onto OUT. Any other combination pushed into IN 1 or IN 2 will result in a
0 being pushed onto OUT.

Figure 3: A NOR gate

IN 1 IN 2 OUT
0 0 1
0 1 0
1 0 0
1 1 0
Figure 4: Truth table for a NOR gate

The three gates that you have seen so far are the basic building blocks for
any type of digital logic. If we route the output of the NOR gate or the NAND gate
through an inverter, then we get the corresponding OR gate and AND gate.
Figure 5: An AND gate

Figure 6: An OR gate

As you can see, AND gates and OR gates require 3 transistors each. This is one
reason why many circuits are built using NAND gates and NOR gates. They are simpler
and require fewer transistors. However, when designing circuits on paper, AND gates
and OR gates are initially used since they are logically simpler to work with for the
human mind.
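
Continuing the software model from the NAND example, routing a gate’s output through an inverter is just applying NOT to it. The illustrative C sketch below (the gate function names are mine) builds AND and OR exactly that way and prints their truth tables:

#include <stdio.h>

int not_gate(int a)         { return !a; }        /* the inverter                  */
int nand_gate(int a, int b) { return !(a & b); }  /* two transistors in series     */
int nor_gate(int a, int b)  { return !(a | b); }  /* two transistors in parallel   */

/* AND and OR are built by feeding NAND and NOR through the inverter. */
int and_gate(int a, int b)  { return not_gate(nand_gate(a, b)); }
int or_gate(int a, int b)   { return not_gate(nor_gate(a, b)); }

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d  OR=%d\n", a, b, and_gate(a, b), or_gate(a, b));
    return 0;
}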

To simplify the process of designing circuits (without having to constantly draw
all these transistors), the following symbols are used to represent NOT, AND, OR,
NAND, and NOR. The little circle at the output of the NOT, NAND, and NOR gates is
called an inversion bubble. It’s commonly used to represent an inverted signal.

Figure 7: Symbols representing the 5 logic gates

In order to use these gates to design any meaningful combinatorial circuits, we
will need to go into a further foundation called Boolean Algebra. Fortunately,
Cygnum from NewOrder has written an article that covers this, titled Designing Digital
Systems.
