
Historical Development of Computers

We are living in the computer age. Most of our day-to-day work is influenced by the use of
computers, and they are used increasingly in every field of our life. In the areas of science
and technology, improvements can hardly be achieved without the use of computers. Hence
it has become necessary to have a basic knowledge of computers.

Strictly speaking, a computer is a calculating device having certain important characteristics
like speed, storage capacity, accuracy etc. But nowadays it is used for many more
applications other than computing. It has become an indispensable tool in the field of
communications.

History of Computers:

Historians start the history of calculation with the abacus, a wooden frame with balls or
beads strung on parallel wires. But the first machine based on the principles of today's
computing machines was developed by Charles Babbage in the nineteenth century. It
embodied certain basic ideas of programs stored in the machine. Such a machine was
devised by Babbage in the year 1822 and was called the difference engine. It was used to
perform the simple arithmetic computations needed for setting up trigonometric and
logarithmic tables. He further developed an analytical engine around 1871 that was a
prototype computer.

Meanwhile an important theoretical development occurred, around 1850, when George
Boole, a mathematician, developed an algebraic system which is now called Boolean
Algebra. This Boolean algebraic system is used to represent quantities as binary numbers,
i.e. 0s and 1s, and also to represent and manipulate logical expressions.

The significance of Boolean Algebra was not realized at that time. In the nineteenth century,
around 1880, Hollerith developed techniques and machines that had a significant impact on
the future design of computers. He designed a machine in which data was represented in the
form of punched holes on paper cards. This machine could work with punched cards and
handled 50-80 punched cards per minute. The punched cards contained 80 columns and
rectangular punches. These machines were called tabulators. They were also used for
semiautomatic selection and sorting of cards. He set up his own company, the
Computing-Tabulating-Recording Company, which eventually became International Business
Machines Corporation (IBM). Today, IBM is one of the largest companies in the computer
world.

Early Computers:

In 1937, Howard Aiken of Harvard University designed a huge mechanical calculator
called MARK I with a number of switches, mechanical relays and cards. Its size was 15 m x
2.4 m x 0.6 m. This was the immediate predecessor of automatic electronic computers.
ENIAC (Electronic Numerical Integrator and Calculator), designed in 1946, was the first
electronic calculator. It occupied a room of 15 m x 9 m and weighed 30 tons. It was water
cooled and much faster than MARK I.

Around 1950, a computer named EDVAC (Electronic Discrete Variable Automatic Computer)
was designed, based on the ideas of von Neumann (frequently referred to as the father of
the modern computer). He was the first to use the stored-program concept in computers.
The storage capacity of EDVAC was 1024 words of 44 bits each. It also had an auxiliary
storage of 20,000 words.

First Generation of Computers (1946-55):

The computers manufactured between 1946 and 1955 are called First Generation computers.
They were extremely large in size, with vacuum tubes in their circuitry, which generated
considerable heat. Hence, special air conditioning arrangements were required to dissipate
this heat.

They were extremely slow and their storage capacity was also very small compared to today's
computers. In these computers punched cards were used to enter data into the computer.
These were cards with rectangular holes punched in them using punching devices.
UNIVAC I, built in 1951 by the Remington Rand Company, was the first commercially
available computer. It had a storage capacity of about 2000 words. These computers were
used mostly for payroll, billing and some mathematical computing.

Second Generation Computers (1956-1965):

The computers in which vacuum tubes were replaced by transistors made from
semiconductors were called second generation computers. The use of transistors reduced
the heat generated during operation. It also decreased the size and increased the storage
capacity. These computers required less power to operate and were much faster than first
generation computers. Magnetic media were used for auxiliary storage of data. These
computers used high level languages for writing computer programs; FORTRAN and COBOL
were the languages used.

Third Generation Computers (1966-1976):

The third generation of computers started in 1966 with the incorporation of integrated
circuits (ICs) in the circuitry. An IC is a monolithic circuit comprising circuitry equivalent to
tens of transistors on a single semiconductor chip of small area, with a number of pins for
external circuit connections.
The IBM 360 series computers of this generation also had provision for time sharing and
multiprogramming.

These were small-sized and cost-effective computers compared to second generation
computers. The storage capacity and speed of these computers increased many fold. New
facilities included user-friendly package programs, word processing and remote terminals.
Remote terminals could use central computer facilities and get the results instantaneously.

Fourth Generation Computers:

Fourth Generation computers were introduced after 1976. In these computers electronic
components were further miniaturized through Large Scale Integration (LSI) techniques.
Microprocessors, which are programmable ICs fabricated using LSI techniques, are used in
these computers. Microcomputers were developed by combining a microprocessor with other
LSI chips, giving compact size, increased speed and increased storage capacity. In recent
years, ICs fabricated using VLSI (Very Large Scale Integration) techniques are used in
computers. Through these techniques, the storage capacity has increased many fold. Not
only that, the speed of these computers is also very high compared to earlier computers.

During the 1980s, some computers called supercomputers were introduced in the market.
These computers perform operations at exceptionally high speed (approximately 100 million
operations per second). This speed is attained by employing a number of microprocessors;
consequently their cost is also very high. They are normally used in very complex
applications like artificial intelligence.

Computer Hardware

A computer is an electronic machine capable of quickly performing complex calculations on
data as per instructions. The present age may be termed the computer age because people
use computers in every walk of life. No other invention has revolutionized the world as much
as the computer. It has become an indispensable tool in every sphere of human life due to
its speed, reliability and accuracy, in addition to its ability to store large amounts of data and
to work according to instructions from the user.

A basic understanding of the manner in which a computer works helps a person in today's
world to appreciate the utility and limitations of this powerful tool. One may use this
knowledge to reduce human drudgery and improve the quality of service.

Here we will discuss:

1. Types of computers
2. The organization of a computer system
3. Features of a computer

Types of Computers

Computers have been classified into two types, namely special purpose computers and
general purpose computers, according to their use. One may also classify them as analogue
and digital computers according to their basic engineering design. Modern computers are all
digital computers.

A) General Purpose Computers:

General purpose computers are designed to meet the needs of many different applications
like simulation, solving mathematical equations, payroll, personal databases, word
processing and many more similar applications. These computers are broadly categorized as
micro computers, mini computers, mainframe computers and super computers.

1) Micro Computers:

Micro computers (personal computers) are designed for use by one person at a time. They
are cheap, easy to use and can be used even at home. Though single-user systems, they can
be linked to other computer systems; hence they form a very important segment of the
integrated information system.

2) Mini Computers:

A mini computer is usually designed to serve multiple users simultaneously and is used for
large volume applications.

3) Main Frame Computers:

Main frame computers are used in applications like weather forecasting, space applications,
banking etc. They support a large number of terminals for use by a variety of users
simultaneously.

4) Super Computers:

Super computers are computers with extremely large storage capacity and very high
processing speeds, at least ten times faster than other computers. These computers are
used for large scale numerical problems in scientific and engineering applications, global
weather forecasting, defence applications and geographical information systems.

With recent advances in microelectronics, microcomputers have become very powerful
with regard to speed, capacity of peripherals and mass storage devices.

B) Special Purpose Computers:

Special purpose computers are designed and built solely to cater to the requirements of a
particular task or application, and are either incorporated inside or connected to other
devices or machines.

The most common example of a special purpose computer is a washing machine. A fully
automatic washing machine has a built-in computer. It receives instructions through a few
switches on the control panel and works accordingly. The sensors in the machine keep
informing the computer about the weight of clothes, water level, shaking time etc.
Accordingly, the computer takes a few decisions, controls the operation and switches off
when the task is complete.

An automatic teller machine (ATM) is another example of a special purpose computer.

The Organization of Computer System

A system is a collection of items bound together by well defined relationships. In this sense,
a computer is referred to as a system. By organization we mean listing the constituents and
bringing out their inter-relationships. We will restrict our attention to the overall organization
of digital computers, in brief, and only to those components which are present in almost
every computer, in some form or the other.

The basic elements of a computer system are hardware, software, humanware and
firmware.

Hardware:

The hardware of a computer comprises the physical units of the machine, with its electronic
and mechanical functional parts. The layout of these functional parts, showing how they are
connected together, is called the architecture of the computer.

Software:

Software is as vital for the effective use of a computer as hardware. Instructions are to be
given to the computer for doing a particular job. A set of such instructions to be executed
sequentially to perform a specific task is called a program. A collection of programs is
referred to as software.

Human Ware:

The group of personnel associated with the various stages of a computer, from manufacture
to actual use, is known as humanware. They act as the interface between the machine and
the end user. It might include the following personnel.

Sr. No   Type                 Functions
1        Hardware Engineer    Design, fabrication and maintenance of the computer system.
2        System Analyst       Studies the problem and prepares the solution and program specifications.
3        Programmer           Writes computer programs.
4        Operator             Operates the computer.

Firmware:

Software which is available as part of the hardware is called firmware. The computer can
retrieve and use this software but cannot modify it easily. These are programs stored in a
ROM chip. This ROM chip is fixed on the motherboard of the computer; thus, it is a part of
the CPU.

Architecture of a Computer System

The fundamental parts of a computer are a Central Processing Unit (CPU), Input and Output
Devices, and Mass Storage Devices.

Let us explain the need and functions of each of the components.

The CPU is the heart of the computer. It consists of three major units: the Arithmetic Logic
Unit (ALU), the control unit and the primary (main) memory. The ALU performs all arithmetic
and logical operations on data in accordance with the instructions. The function of the
control unit is like that of the nervous system in the human body. It supervises all operations
in the CPU. It takes up each instruction from the program and interprets it. It moves
appropriate data from the memory to the ALU and gets the required operation done on the
data. It then transfers the results back to the memory. It also communicates with input,
output and other peripheral devices. When a job is being executed on the computer, the
data as well as the instructions are kept in the primary storage. This storage is a high speed
memory called Random Access Memory (RAM). Secondary storage devices are used to store
large amounts of data and programs. They are generally magnetic media, namely floppy
disks, hard disks and tapes.

We need input and output devices to communicate with the CPU. Data as well as
instructions are fed in through input devices. Once the data is processed by the CPU, the
results are passed on to the user through an output device.

A computer is designed to perform a variety of tasks. However, it is supposed to do what
the user desires it to do. Hence arises the problem of communication between the user and
the computer. A description of the task to be performed by the computer is fed as input to it
through an input device. A computer could have one or more input devices. However, the
description of a task may be fed through only one device at a time.

If somebody were to tell us how to solve a quadratic equation, he would give a description
of the method of solution. This description is fed to us through our ears, which act as the
input device. Where does this description go after being fed through our ears? We know it
resides in the storage cells of our brain. Analogously, the description (information) fed to the
computer through an input device is stored in the memory of the computer.

After the task description is fed and stored in the memory of the computer, it is the Central
Processing Unit (CPU) that interprets it, and the operations needed to perform the task as
per the description are executed by the CPU. These operations include arithmetic operations
like addition, subtraction, multiplication and division. The CPU can also perform a variety of
other operations like logical operations, controlling the flow of data/information, and
coordinating the operations of all the devices connected to it.

Once the specified task is performed by the computer, it must let us know the answers to
the problem we gave it to solve. This is accomplished through an output device. The results
can be displayed, printed or stored in some other form. The results obtained after solving
the problem are generally known as the output from the computer.

The description of the task to be performed, the data to be operated upon and the output
results can all be stored in mass storage devices for further use, whenever needed.

This summarizes the functions of all the essential components of a computer system.

Information Flow With in a Computer

The information that flows within a computer can be classified as

1. Programs and Data
2. Control Information

Programs and Data:

A program is what we have referred to earlier as the description of the task to be performed
by a computer. Data refers to the set of values assumed by the variables in the program. For
example, if we write a program to solve a quadratic equation ax^2 + bx + c = 0, then a
particular set of values of a, b and c forms the data for this program. Thus, if one desires to
solve a particular quadratic equation, one needs to feed in both the program and the data
for that particular equation.
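
As a minimal illustration of the program/data distinction (a Python sketch, not part of the
original text; the function name solve_quadratic is only for illustration), the function below
plays the role of the program, while the values of a, b and c passed to it are the data:

import math

def solve_quadratic(a, b, c):
    # The 'program': a fixed description of how to solve ax^2 + bx + c = 0.
    d = b * b - 4 * a * c
    if d < 0:
        return None                      # no real roots
    return ((-b + math.sqrt(d)) / (2 * a), (-b - math.sqrt(d)) / (2 * a))

# The 'data': one particular equation, x^2 - 3x + 2 = 0.
print(solve_quadratic(1, -3, 2))         # (2.0, 1.0)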

Programs and data enter the computer through an input device and get stored in the
memory. The data which goes out through an output device is known as output data, or
simply the output from the computer.

Whenever any arithmetic operation is to be performed on the input data, the data is
transferred from memory to the ALU. The arithmetic operation is performed and the results
are passed on to the user through an output device.

Control Information:

There is a need to control the flow of instructions and appropriate data from memory to the
CPU. This requires the various devices within a computer to behave in a controlled manner.
This is accomplished by the control unit in the CPU. The control unit controls the various
devices in the computer by sending them information in the form of control signals. It can
also ascertain the present status of the devices by getting status signals from them. For
example, the control unit has to ascertain whether the output device is ready before
signalling it to carry out the desired work (say printing). The control unit controls these
devices in accordance with the instructions in the user program.

Input and Output Devices

A wide variety of input and output devices is used for communication with the computer.
We will describe some commonly used input and output devices.

Keyboard:

The keyboard is the most commonly used input device. A keyboard is used to enter
information and instructions into a computer. It consists of a set of keys similar to those of a
typewriter. In addition, it has some special keys like Ctrl, Alt, Esc, Return and function keys.
These keys have special functions. The layout is similar to that of a typewriter.

Mouse:

The mouse is also an input device, which provides an alternative to the keyboard for the
entry of instructions. It is a hand-held pointing device which can be used to move the cursor
on the screen of the VDU, and the required action is chosen by pressing a button on the
mouse.

Microphone:

A microphone is used with a sound card to record speech and other sounds.

Scanner:

A scanner reads graphics and text into a computer. Scanners are available in various sizes.

VDU (Monitor):

The Video Display Unit is the most commonly used output device. It is similar to a television,
using a Cathode Ray Tube (CRT) for display. Information (text and images) can be quickly
displayed on the screen of a VDU. Several types of VDUs are used with computers, e.g. CGA,
VGA, EGA, etc. These are available with both colour and black-and-white screens. The size of
a monitor is measured diagonally across the screen. The screen of such a VDU generally
contains 24 lines of text with 80 characters in each line.

Printers:

A printer is an output device which facilitates printing of the output on paper, one line of
characters at a time. A variety of printers is available, with various printing speeds and
printing quality. Some are described briefly below.

Plotters:

Plotters are useful for producing high quality line graphs, maps and plans. They are
available in many sizes. A plotter draws lines on paper using pens. Some models can
produce coloured output by selecting coloured pens from a pen holder. The required pen is
picked up from the stand and moved over the paper under software control. A combination
of movements creates pictures and graphs. It is a slow device.

Speakers:

Speakers are used to listen to sounds created by a sound card in a computer.

Types of Printer

1) Dot Matrix Printer:

Dot matrix printers (DMPs) are often used to print multipart forms. A DMP prints a character
in the form of a group of dots called a matrix. If you look very closely at the characters
printed by a DMP, you can see the dots.

The print head consists of printing pins arranged in the form of a matrix. A signal from the
computer causes certain pins to hit the ribbon, creating a pattern of dots and hence a
character on the paper. The print head goes on printing characters by shifting itself in an
appropriate manner. The quality of printing is determined by the number of pins on the
print head: the more the pins, the better the quality of printing. Dot matrix printers with two
types of print heads, one with 9 pins and the other with 24 pins, are available. The 24-pin
DMP has the facility to print in colour. The printing speed of these printers is about 240-300
characters per second (cps).

Line Printer:

Basically a line printer is also a DMP, but the number of heads used is more than one. Thus,
a single line is printed by a number of heads simultaneously, increasing the speed of
printing. The printing speed of these printers is of the order of 60-100 lines per minute (lpm).

Inkjet Printers:

In inkjet printers, characters are printed on paper by squirting ink through an array of tiny
nozzles in an ink cartridge. An ink cartridge costs more than a ribbon. Because there is no
impact noise from pins striking the paper, inkjet printers are quiet.

Laser Printers:

Laser printers have become increasingly popular where high quality printing is required. A
laser printer operates much like a photocopier. A laser beam scans a bit-mapped image of
the page onto an electrostatic drum, in a process called exposure. Toner is then attracted to
the charged areas, in a process called development. It is then transferred electrostatically
to the paper and fused by hot pressure rollers.

Laser printers are more complex than other types of printers, and consequently more
expensive. In recent years, both their size and their cost have been decreasing.

I/O Devices (Input and Output devices)

Some devices work as both input and output devices. Examples are the Floppy Disk Drive,
Hard Disk Drive, Compact Disk Drive (CDD), Cassette Tape Drive, Modem, etc.
Floppies, cassette tapes and CDs require their respective drives (for mechanical movement
of the media) to be interfaced to the CPU for reading or writing data.

FDD:

In an FDD, the drive motor rotates the disk at a speed of 300-360 rpm and the read/write
head moves in contact with the recording surface of the disk. These heads work much like
those in an audio cassette recorder. Diskette drives contain several sensors, viz. a write
protect sensor, diskette sensor, media type sensor, etc. All PCs have an FDD controller which
interfaces the drive with the CPU.

HDD:

The working principle of a hard disk is the same as that of a floppy diskette, except that it is
a stack of disks. Each disk (platter) has a recording surface on either side, so it has more
than two read/write heads. Hard disks can store a large amount of data/information. The
maximum capacity in recent years is of the order of 4 GB (about 3000 times that of a
normal floppy). They are much more reliable than floppy disks.

CTD:

The mechanism and working principle of a Cartridge Tape Drive (CTD) are similar to those of
an audio cassette recorder. Data/information can be recorded in the form of digitized signals
on the magnetic media and read from the magnetic cassettes driven by the CTD. The
storage capacity varies from 200 MB to 2 GB.

A compact disk (CD) is an optical medium in the form of a disk which is used to store
data/information. A Compact Disk Drive (CDD) is needed to drive the CD. The storage
capacity is approximately 1000 times that of a floppy diskette.

Information can be written to these storage media using the respective drives. Similarly, it
can also be read. Thus, these devices are I/O devices.

Modem:

A modem lets computers exchange information through telephone lines. One can use a
modem to connect one's computer to another. One can also use this facility to access the
internet and to send and receive email messages and faxes.

Memory of Computer

Memory is an integral part of every computer. It is used for storing data and programs
which can be used at a later time, whenever required. For the sake of understanding,
imagine memory to be like an array of switches as shown below

Location Contents
0
1
2 10100110
3
4

The switches in the array are organized in a number of rows, each having 8 columns. Thus,
each row has 8 switches. Each switch can be either ON or OFF at any instant of time. This
means that the switches that constitute a memory are two-state devices, i.e. ON and OFF. In
the ON state a switch may be thought of as representing a 1, and in the OFF state a 0. Thus,
each switch may be thought of as the smallest unit of memory, capable of storing a 0 or a 1,
and is called a bit. As represented in the table above, each row has 8 switches, i.e. at any
instant of time a particular row of switches may represent a pattern, say 01011011, called a
word. A group of 8 bits is called a byte. A word length may be more than 1 byte, depending
on the type of CPU. There is a variety of CPUs which handle word lengths from 1 byte to 8
bytes.

Normally a single alphanumeric character requires 1 byte of memory for its storage.
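
As a minimal illustration (a Python sketch, not part of the original text), the byte shown in
the memory table above can be read either as a pattern of 8 switches or as a number:

pattern = "10100110"          # the 8 switches of one memory location (1 = ON, 0 = OFF)
value = int(pattern, 2)       # interpret the byte as an unsigned integer
print(value)                  # 166
print(format(value, "08b"))   # back to the 8-bit pattern: 10100110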

The size of memory is specified in terms of BYTES. When the memory size is large, it is
specified, for convenience, in multiples of BYTE as shown below.

1 Kilobyte (KB) = 2^10 bytes = 1024 bytes
1 Megabyte (MB) = 2^10 KB = 1024 KB
1 Gigabyte (GB) = 2^10 MB = 1024 MB
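
The same relationships can be checked with a short Python sketch (not part of the original
text; the variable names are only for illustration):

KB = 2 ** 10        # 1024 bytes
MB = 2 ** 10 * KB   # 1,048,576 bytes
GB = 2 ** 10 * MB   # 1,073,741,824 bytes
print(KB, MB, GB)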

Types of Memories

Memories used in computers can be classified as primary memory and secondary (or
auxiliary) memory.

1) Primary Memory:

Every computer has a certain amount of primary memory. It is in the form of semiconductor
memory ICs fixed on the CPU board itself. The size of this type of memory varies from 640
KB to 64 MB for a particular computer. There is always a limit on the size of primary memory
a computer can have, since all the ICs are to be fixed on the CPU card and adding more
would increase the size of the motherboard.

Primary memories consist mainly of Random Access Memory (RAM) and Read Only Memory
(ROM) or Erasable Programmable Read Only Memory (EPROM) in the form of semiconductor
ICs, associated with CPU.

RAM:

It is a read-and-write type of memory. It is used by the CPU as a scratch pad for temporary
storage of programs, data, intermediate results of computations, outputs etc. during the
execution of a program. When the power is switched off, the contents of this type of memory
are lost. The amount of RAM in a PC varies from 640 KB to 64 MB. The first 640 KB is known
as the basic block, the next portion up to 1 MB is known as extended memory, and the
remainder is called the expanded memory block.

ROM:

It is a read-only type of memory. Information stored in it can only be read. The information,
in the form of programs or data, is stored at the time of fabrication of the IC itself and the
contents cannot be changed afterwards.

EPROM:

It is an Erasable Programmable Read Only Memory in the form of a semiconductor IC. The
contents are normally read-only but can be changed if needed. For this, the existing contents
are first erased by exposing the chip to ultraviolet light, and then the required information,
in the form of program instructions and data, is stored into it using special EPROM
programming devices. In computers, the BIOS is supplied by the manufacturer in an EPROM
chip.

2) Auxiliary Memory:

The secondary or auxiliary type of memory consists mainly of magnetic media used for
mass storage of information. It consists of floppy diskettes, hard disks and cassette tapes.

Magnetic floppies are available in two sizes: 5.25", with a storage capacity of 1.2 MB, and
3.5", with a storage capacity of 1.44 MB. These are read/written by the CPU through a
read/write head in the Floppy Disk Drive. Hard disks have a comparatively large storage
capacity; the storage capacity of hard disks ranges from 200 MB to 4 gigabytes. Magnetic
cassettes are also used for mass storage of data, and their capacity ranges from 150 MB to
2 GB. In recent years, compact disks (CDs) are also used for data storage. These are optical
media and are driven in a CDD. A special device called a CD writer (cutter) is required for
writing data onto a CD.

Sr No   Type               Size           Capacity
1       Floppy Diskettes   5.25" / 3.5"   1.2 MB / 1.44 MB, 2.88 MB
2       Hard Disk                         200 MB to 4 GB (see text)
3       Magnetic Tape                     200 MB to 2.1 GB
4       Compact Disk                      of the order of GB

Features of a Computer

High speed, accuracy, large storage capacity, high reliability and versatility are some of the
important features of computers.

Speed:

A computer executes instructions at very high speed. The CPU of a computer can perform
more than 10 million operations per second. All instructions are executed in accordance with
a clock, whose frequency is measured in MHz. Normally, 3-4 cycles of this clock are required
to execute one instruction. Recent computers have a clock speed of about 300 MHz, i.e. one
cycle takes approximately 3 x 10^-9 s. This means that an instruction can be executed in
about 10 nanoseconds (10 x 10^-9 s). In other words, the computer can execute about 100
million instructions in one second. But the overall speed of a computer decreases due to the
slower input and output devices interfaced to the CPU.

Storage:

Because of the speed with which computers can process large quantities of data/information,
the size of the input, and also of the output, is quite large. The size of the information to be
stored increases further due to graphics applications. All this information has to be stored in
auxiliary memory, i.e. the hard disk fitted inside the computer. Hard disks nowadays have a
storage capacity as large as 4 GB. The size of the internal primary memory (RAM) has also
increased a lot, to about 64 MB.

Accuracy:

The accuracy of results computed by a computer is consistently high. Due to digital
techniques the error is very small. Errors in computing may be due to logical mistakes by a
programmer or due to inaccurate data.

Reliability:

The reliability of results processed by a computer is very high. If a program is executed any
number of times with the same set of data, the results would be the same every time.

Versatility:

Computers are capable of performing almost any task, provided the task can be reduced to
a series of logical steps so that an appropriate program in a suitable language can be fed
into the computer memory. Of course, the input and output devices should be capable of
performing the desired task. Because of these capabilities, a number of processes can be
automated with the help of a computer.

Apart from those outlined above, computers have some other features as well. They are
automatic to a great extent, i.e. they run with very little human interference. They can work
endlessly at the same level of efficiency and productivity. Modern computers are becoming
more and more user friendly, i.e. the computer itself helps the user at every stage. Visual
displays, limited but effective use of a natural language like English, and appropriate
software have made it very easy to operate computers.

Binary Coding System

We normally use the decimal number system in our day to day life. It has the digits 0, 1, 2,
..., 8, 9 as symbols for representing numbers. It is a positional number system with base 10.
A computer, on the other hand, uses the binary number system for representing data. It has
only two symbols, 0 and 1 (the ON and OFF of a switch), called bits or binary digits. In
computers, therefore, numbers are represented using these two digits only, and this system
of representation of data is called the binary coding system.

Even though a computer has only two symbols, 0 and 1, it can represent all numeric
quantities and alphanumeric characters (numbering around 60) by using sets of these bits.
For example, a set of 3 such bits can represent 8 different characters, because it is possible
to form 8 different combinations from 3 bits. If an 8-bit word is used, 2^8 = 256 such
combinations are possible, and hence all these alphanumeric characters can very easily be
represented.
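
A short Python sketch (not part of the original text) of the counting argument above: n bits
can form 2^n distinct patterns.

for n in (3, 8):
    print(n, "bits ->", 2 ** n, "combinations")   # 3 bits -> 8, 8 bits -> 256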

In computers, each alphanumeric character is coded in the form of a separate set of bits.
This code is termed the binary code. Arithmetic operations in a computer use numbers coded
in binary form. The binary number system is also a positional number system, with the two
symbols 0 and 1 and appropriate positional values. The positional values are found by
raising the base of the number system to the power of the position.

Any combination of 0s and 1s is a valid binary number. It can be converted to its decimal
equivalent by multiplying each digit by its positional value, as illustrated below:

(1101)2 = 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0 = (13)10
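
The same positional-value method can be written as a short Python sketch (not part of the
original text; the function name binary_to_decimal is only for illustration):

def binary_to_decimal(bits):
    value = 0
    for digit in bits:               # scan from the most significant bit
        value = value * 2 + int(digit)
    return value

print(binary_to_decimal("1101"))     # 13, as in the worked example
print(int("1101", 2))                # Python's built-in conversion agrees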

Similarly, a decimal integer can be converted into its binary equivalent by repeated division
by 2, as illustrated below:

Example: Conversion of 25 to binary form

Number to be divided by 2   Quotient   Remainder
25                          12         1   (LSB)
12                          6          0
6                           3          0
3                           1          1
1                           0          1   (MSB)

Collecting the remainders, with the last remainder as the most significant bit, we find that

(25)10 = (11001)2
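
The repeated-division method can be sketched in Python as follows (not part of the original
text; the function name decimal_to_binary is only for illustration):

def decimal_to_binary(n):
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits     # each remainder becomes the next bit, LSB collected first
        n //= 2
    return bits

print(decimal_to_binary(25))         # 11001, as in the worked example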

Binary Fractions

Just as we use a decimal point for representing fractional decimal numbers, a binary point is
used to represent a fraction in binary numbers. The positional values in this case are 2^-1,
2^-2, 2^-3, ... for successive positions to the right of the binary point.

For example,

(0.111)2 = 1 x 2^-1 + 1 x 2^-2 + 1 x 2^-3
         = 1 x 0.5 + 1 x 0.25 + 1 x 0.125
         = (0.875)10
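
A short Python sketch of this positional-value method for binary fractions (not part of the
original text; the function name is only for illustration):

def binary_fraction_to_decimal(bits):
    value = 0.0
    for i, digit in enumerate(bits, start=1):   # positions 1, 2, 3, ... right of the binary point
        value += int(digit) * 2 ** (-i)
    return value

print(binary_fraction_to_decimal("111"))        # 0.875, as in the worked example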

To convert a decimal fraction to a binary fraction, the following rule is used:

Multiply the decimal fraction repeatedly by 2. The whole-number part of the result gives the
first bit of the binary fraction. The procedure is then repeated with the fractional part of the
result, and so on.

Example: Convert (0.375)10 to binary fraction

0.375 x 2 = 0.750 -------- 0
0.750 x 2 = 1.50  -------- 1
0.50  x 2 = 1.00  -------- 1

Hence (0.375)10 = (0.011)2
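
The repeated-multiplication method can be sketched in Python as follows (not part of the
original text; the function name and the max_bits limit are only for illustration):

def decimal_fraction_to_binary(x, max_bits=8):
    bits = "0."
    while x > 0 and len(bits) - 2 < max_bits:   # stop after max_bits if it does not terminate
        x *= 2
        whole = int(x)                          # the whole-number part gives the next bit
        bits += str(whole)
        x -= whole
    return bits

print(decimal_fraction_to_binary(0.375))        # 0.011, as in the worked example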
