
11.

Error Detection and Correction Codes:

An error is a condition in which the output information does not match the input
information. During transmission, digital signals suffer from noise that can
introduce errors in the binary bits travelling from one system to another. That
means a 0 bit may change to 1 or a 1 bit may change to 0.

Error-Detecting codes:
Whenever a message is transmitted, it may get scrambled by noise or data may
get corrupted. To avoid this, we use error-detecting codes which are additional
data added to a given digital message to help us detect if an error occurred during
transmission of the message. A simple example of error-detecting code is parity
check.

Error-Correcting codes:
Along with error-detecting code, we can also pass some data to figure out the
original message from the corrupt message that we received. This type of code
is called an error-correcting code. Error-correcting codes also deploy the same
strategy as error-detecting codes but additionally, such codes also detect the
exact location of the corrupt bit.

In error-correcting codes, parity check has a simple way to detect errors along
with a sophisticated mechanism to determine the corrupt bit location. Once the
corrupt bit is located, its value is reverted (from 0 to 1 or 1 to 0) to get the
original message.

How to Detect and Correct Errors?


To detect and correct the errors, additional bits are added to the data bits at the
time of transmission.

 The additional bits are called parity bits. They allow detection or correction
of the errors.

 The data bits along with the parity bits form a code word.

Parity Checking for Error Detection


It is the simplest technique for detecting and correcting errors. The MSB of an
8-bit word is used as the parity bit and the remaining 7 bits are used as data or
message bits. The parity of the transmitted 8-bit word can be either even or
odd.
Even parity -- Even parity means the number of 1's in the given word including
the parity bit should be even (2,4,6,....).

Odd parity -- Odd parity means the number of 1's in the given word including
the parity bit should be odd (1,3,5,....).

Use of Parity Bit


The parity bit can be set to 0 and 1 depending on the type of the parity required.

 For even parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the
entire word is even. Shown in fig. (a).

 For odd parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the
entire word is odd. Shown in fig. (b).

How Does Error Detection Take Place?


Parity checking at the receiver can detect the presence of an error if the parity of
the received signal differs from the expected parity. That means, if it is known
that the parity of the transmitted signal is always going to be "even" and the
received signal has odd parity, then the receiver can conclude that the
received signal is not correct. If an error is detected, the receiver will ignore
the received byte and request the transmitter to retransmit it.
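The even-parity scheme above can be sketched in a few lines of Python. This is a minimal illustration, not a full transmission protocol; the function names are chosen for this example.

```python
# Even-parity generation and checking for a 7-bit message,
# a minimal sketch of the scheme described above.

def even_parity_bit(data_bits):
    """Return the parity bit that makes the total number of 1s even."""
    return sum(data_bits) % 2

def make_codeword(data_bits):
    """Prepend the parity bit (MSB) to the 7 data bits to form a code word."""
    return [even_parity_bit(data_bits)] + list(data_bits)

def has_error(codeword):
    """Receiver side: an odd number of 1s signals a single-bit error."""
    return sum(codeword) % 2 != 0

message = [1, 0, 1, 1, 0, 0, 1]        # 7 data bits
word = make_codeword(message)           # 8-bit code word
assert not has_error(word)              # clean transmission passes

corrupted = word.copy()
corrupted[3] ^= 1                       # noise flips one bit
assert has_error(corrupted)             # receiver detects the error
```

Note that this detects any odd number of flipped bits but, as with all single parity checks, cannot detect an even number of errors or locate the corrupt bit.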
12.a.

De Morgan suggested two theorems which are extremely useful in
Boolean Algebra. The two theorems are discussed below.

Theorem 1: (A·B)' = A' + B'

 The left-hand side (LHS) of this theorem represents a NAND gate with inputs
A and B, whereas the right-hand side (RHS) of the theorem represents an OR
gate with inverted inputs.

 This OR gate is called a Bubbled OR.

Table showing verification of the De Morgan's first theorem −


Theorem 2: (A + B)' = A'·B'

 The LHS of this theorem represents a NOR gate with inputs A and B, whereas
the RHS represents an AND gate with inverted inputs.

 This AND gate is called a Bubbled AND.

Table showing verification of the De Morgan's second theorem −
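In place of the verification tables, both theorems can be checked exhaustively over all input combinations. A minimal Python sketch:

```python
# Exhaustive truth-table check of both De Morgan theorems for two inputs.
# Complement is modeled as 1 - x; & and | are bitwise AND and OR.
for A in (0, 1):
    for B in (0, 1):
        # Theorem 1: (A AND B)' == A' OR B'   (NAND == Bubbled OR)
        assert (1 - (A & B)) == ((1 - A) | (1 - B))
        # Theorem 2: (A OR B)' == A' AND B'   (NOR == Bubbled AND)
        assert (1 - (A | B)) == ((1 - A) & (1 - B))
print("Both De Morgan theorems verified")
```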


12.b. BASIC THEOREMS OF BOOLEAN ALGEBRA:

This section lists six theorems of Boolean algebra together with four of its
postulates. The notation is simplified by omitting the AND operator whenever
this does not lead to confusion. The theorems and postulates listed in the
table below are the most basic in Boolean algebra. Like the postulates, the
theorems are listed in pairs; each relation is the dual of the one paired with
it. The postulates are the basic axioms of the algebraic structure and need
no proof, whereas the theorems must be proven from the postulates. In the
proofs, at the right of each step is listed the number of the postulate that
justifies it.

The table below shows the postulates and theorems of Boolean algebra:

Postulate 2                a) x + 0 = x                   b) x · 1 = x
Postulate 5                a) x + x' = 1                  b) x · x' = 0
Theorem 1                  a) x + x = x                   b) x · x = x
Theorem 2                  a) x + 1 = 1                   b) x · 0 = 0
Theorem 3, involution      a) (x')' = x
Postulate 3, commutative   a) x + y = y + x               b) xy = yx
Theorem 4, associative     a) x + (y + z) = (x + y) + z   b) x(yz) = (xy)z
Postulate 4, distributive  a) x(y + z) = xy + xz          b) x + yz = (x + y)(x + z)
Theorem 5, DeMorgan        a) (x + y)' = x'y'             b) (xy)' = x' + y'
Theorem 6, absorption      a) x + xy = x                  b) x(x + y) = x
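The theorems in the table can be verified by brute force over all combinations of x, y and z. A small Python sketch, modeling OR as max, AND as min, and complement as 1 − x:

```python
# Brute-force verification of the listed theorems over x, y, z in {0, 1}.
from itertools import product

OR, AND = max, min
NOT = lambda a: 1 - a

for x, y, z in product((0, 1), repeat=3):
    assert OR(x, x) == x and AND(x, x) == x            # Theorem 1
    assert OR(x, 1) == 1 and AND(x, 0) == 0            # Theorem 2
    assert NOT(NOT(x)) == x                            # Theorem 3, involution
    assert OR(x, OR(y, z)) == OR(OR(x, y), z)          # Theorem 4, associative
    assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))        # Theorem 5, DeMorgan
    assert OR(x, AND(x, y)) == x                       # Theorem 6, absorption
```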

14.A. Design an excess-3 code to BCD converter

We know that the excess-3 code begins with binary 0011 (decimal 3) and continues up to
binary 1100 (decimal 12), where I get the output binary 1001 (decimal 9) for input binary
1100 (decimal 12). So I need 4 variables as inputs and 4 variables as outputs. With 4 variables I can
represent 16 binary values from 0000 to 1111. Since I do not use 0, 1, 2, 13, 14, 15 as inputs, when I
simplify the output functions I use those terms as don't-care conditions.
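The behaviour of the converter can be checked against a direct subtract-3 model. A minimal sketch (the function name is chosen for this example):

```python
# Excess-3 to BCD: the BCD output is the 4-bit input minus 3.
# Inputs 0011..1100 are valid; everything else is a don't-care
# (rejected explicitly here for clarity).

def excess3_to_bcd(bits):
    value = int(bits, 2)
    if not 3 <= value <= 12:
        raise ValueError("not a valid excess-3 code")
    return format(value - 3, "04b")

assert excess3_to_bcd("0011") == "0000"   # decimal 3  -> BCD 0
assert excess3_to_bcd("1100") == "1001"   # decimal 12 -> BCD 9
```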
1)—

We are used to using the base-10 number system, which is also called
decimal. Other common number systems include base-16 (hexadecimal),
base-8 (octal), and base-2 (binary).
Base-16 is also called hexadecimal. It’s commonly used in computer
programming, so it’s very important to understand. Let’s start with counting
in hexadecimal to make sure we can apply what we’ve learned about other
bases so far.
Understanding different number systems is extremely useful in many
computer-related fields. Binary and hexadecimal are very common, and I
encourage you to become very familiar with them.
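The same value written in each of the bases mentioned above, using Python's built-in base conversions:

```python
# One value, four bases: decimal 255 in binary, octal and hexadecimal.
n = 255
assert format(n, "b") == "11111111"   # binary (base 2)
assert format(n, "o") == "377"        # octal (base 8)
assert format(n, "x") == "ff"         # hexadecimal (base 16)

# int() converts back from any base, so all four spellings agree:
assert int("ff", 16) == int("377", 8) == int("11111111", 2) == 255
```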

2—
The XOR ( exclusive-OR ) gate acts in the same way as the logical
"either/or." The output is "true" if either, but not both, of the inputs are
"true." The output is "false" if both inputs are "false" or if both inputs are
"true." Another way of looking at this circuit is to observe that the output is 1
if the inputs are different, but 0 if the inputs are the same.
Applications:
These logic gates are used in parity generation and checking units. The two
diagrams below show the even and odd parity generator circuits respectively
for four data bits.
With the help of these gates, the parity check operation can also be
performed. The diagrams below show even and odd parity checking.
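The cascaded-XOR parity generator in the diagrams can be sketched as follows; the function names are illustrative only:

```python
# Even-parity generator for four data bits built purely from XOR gates,
# mirroring the cascaded-XOR circuit described above.

def xor(a, b):
    return a ^ b

def even_parity_generator(d3, d2, d1, d0):
    """Cascade of three XOR gates; the output is the even-parity bit."""
    return xor(xor(d3, d2), xor(d1, d0))

# The generated bit always makes the total number of 1s even:
for bits in [(1, 0, 1, 1), (0, 0, 0, 0), (1, 1, 1, 1)]:
    p = even_parity_generator(*bits)
    assert (sum(bits) + p) % 2 == 0
```

The odd-parity generator is the same circuit with the final output inverted.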
5—
A signed binary number can be represented in one of three ways:
1. Signed magnitude representation
2. 1's complement representation
3. 2's complement representation
Signed magnitude representation:
1. If the data has positive as well as negative numbers, then signed binary numbers should be used.
2. The + and − signs are represented in binary using 0 and 1: 0 is used to represent the ( + )
sign and 1 is used to represent the ( − ) sign.
3. The MSB of the binary number is used to represent the sign and the remaining bits are used to represent
the magnitude.

7—

The JK Flip Flop is the most widely used flip flop. It is considered to be a
universal flip-flop circuit. The sequential operation of the JK Flip Flop is the
same as for the RS flip-flop with the same SET and RESET inputs. The difference is
that the JK Flip Flop does not have the invalid input state of the RS latch (when S
and R are both 1). The JK Flip Flop is often said to be named after Jack Kilby,
the inventor of the integrated circuit.

The basic NAND gate RS flip-flop suffers from two main problems. Firstly, the
condition S = 0 and R = 0 should be avoided. Secondly, if S or R changes
state while the enable input is high, the correct latching action does not
occur. The JK Flip Flop was designed to overcome these two problems of the
RS flip-flop.

The JK Flip Flop is basically a gated RS flip-flop with the addition of clock
input circuitry. When both inputs S and R are equal to logic "1", the
invalid condition takes place; to prevent it, a clock circuit is introduced.
Because of the clocked input, the JK Flip Flop has four possible input
combinations, corresponding to "SET", "RESET", "no change" and "toggle".

When both the J and K inputs are at logic "1" at the same time and the clock
input is pulsed HIGH, the circuit toggles from its SET state to RESET or vice
versa. With both terminals HIGH, the JK flip-flop acts as a T-type
toggle flip-flop.
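The four input combinations can be captured in a small behavioral model of the next state on each clock pulse; this is an illustrative sketch, not a gate-level model:

```python
# Behavioral model of a JK flip-flop: next state of Q on a clock pulse.

def jk_next(J, K, Q):
    if J == 0 and K == 0:
        return Q          # no change
    if J == 0 and K == 1:
        return 0          # RESET
    if J == 1 and K == 0:
        return 1          # SET
    return 1 - Q          # J = K = 1: toggle

Q = 0
Q = jk_next(1, 0, Q); assert Q == 1   # SET
Q = jk_next(0, 0, Q); assert Q == 1   # no change
Q = jk_next(1, 1, Q); assert Q == 0   # toggle (T-type behavior)
Q = jk_next(0, 1, Q); assert Q == 0   # RESET
```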

8-
The synchronous ring counter example above is preset so that exactly one data bit in the
register is set to logic "1", with all the other bits reset to "0". To achieve this, a "CLEAR"
signal is first applied to all the flip-flops together in order to "RESET" their outputs to a
logic "0" level, and then a "PRESET" pulse is applied to the input of the first flip-flop ( FFA )
before the clock pulses are applied. This places a single logic "1" value into the circuit
of the ring counter.
On each successive clock pulse, the counter circulates the same data bit between the four
flip-flops, over and over again around the "ring", returning every fourth clock cycle. But in
order to cycle the data correctly around the counter we must first "load" it with a suitable
data pattern, as all logic "0"s or all logic "1"s output at each clock cycle would make the
ring counter invalid.
This type of data movement is called "rotation", and as in the previous shift register, the
data bit moves from left to right through the ring counter on each clock pulse.
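The preset-then-rotate behaviour can be simulated in a few lines; this is a minimal sketch of the 4-bit ring counter described above:

```python
# 4-bit ring counter: preset to 1000, then rotate one position per clock.

state = [1, 0, 0, 0]                      # CLEAR all, then PRESET FFA
history = []
for _ in range(4):
    history.append(state.copy())
    state = [state[-1]] + state[:-1]      # each FF takes the previous output

# The single 1 circulates and returns every fourth clock pulse:
assert history == [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert state == [1, 0, 0, 0]
```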

9—

                     SRAM                            DRAM

Definition           It is a type of RAM. SRAM       It is also a type of RAM. DRAM
                     essentially uses latches to     makes use of capacitors to store
                     store charge.                   bits in the form of charge.

Speed                Faster                          Slower

Size                 Bigger                          Smaller

Cost                 More expensive per bit          Less expensive per bit

Peripheral           Comparatively less              Comparatively more
circuitry needed

Type                 Comparatively less common       Comparatively more common

Capacity (same       Less                            5 to 10 times more than SRAM
technology)

Applications         Generally in smaller            Commonly used as the main
                     applications like CPU cache     memory in personal computers
                     memory and hard drive buffers

Types                Asynchronous SRAM               Fast Page Mode DRAM
                     Synchronous SRAM                Extended Data Out DRAM
                     Pipeline Burst SRAM             Burst EDO DRAM
                                                     Synchronous DRAM

Access               Easy                            Harder

Construction         Difficult                       Simple

Power                Less                            More
consumption

Density              Low density / less memory       High density / more memory
                     per chip                        per chip

10—

 CPLDs are ideal for critical, high-performance control applications.


 CPLD can be used for digital designs which perform boot loader functions.
 CPLD is used to load configuration data for an FPGA from non-volatile
memory.
 CPLD are generally used for small designs, for example, they are used in
simple applications such as address decoding.
 CPLDs are often used in cost-sensitive, battery-operated portable
applications because of their small size and low power usage.

19—

Field-Programmable Gate Array (FPGA) is a semiconductor device containing


programmable logic components called "logic blocks", and programmable interconnects.
Logic blocks can be programmed to perform the function of basic logic gates such as AND,
and XOR, or more complex combinational functions such as decoders or mathematical
functions. In most FPGAs, the logic blocks also include memory elements, which may be
simple flip-flops or more complete blocks of memory.

An FPGA consists of a large number of "configurable logic blocks" (CLBs) and routing channels.
Multiple I/O pads may fit into the height of one row or the width of one column in the array. In
general, all the routing channels have the same width.
Block diagram-

CLB: The CLB consists of an n-input look-up table (LUT), a flip-flop and a 2x1 mux. The value
of n is manufacturer specific; increasing n can increase the performance of an FPGA.
Typically n is 4. An n-input lookup table can be implemented with a multiplexer whose select
lines are the inputs of the LUT and whose inputs are constants. An n-input LUT can encode
any n-input Boolean function by modeling such functions as truth tables. This is an efficient
way of encoding Boolean logic functions, and LUTs with 4-6 inputs are in fact the key
component of modern FPGAs. The block diagram of a CLB is shown below.
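The truth-table idea behind the LUT can be sketched directly: precompute the function over all input combinations, then let the inputs index the stored constants. The helper name is chosen for this example:

```python
# A 4-input LUT modeled as a 16-entry truth table: the inputs select one
# of the stored constants, so any 4-input Boolean function can be encoded.

def make_lut(func):
    """Precompute func over all 16 input combinations ("programming" the LUT)."""
    table = [func((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1)
             for i in range(16)]
    def lut(a, b, c, d):
        # The inputs act as the multiplexer select lines into the table.
        return table[(a << 3) | (b << 2) | (c << 1) | d]
    return lut

# Encode an arbitrary function, e.g. (a AND b) XOR (c OR d):
f = make_lut(lambda a, b, c, d: (a & b) ^ (c | d))
assert f(1, 1, 0, 0) == 1
assert f(1, 1, 1, 0) == 0
```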

Each CLB has n inputs and only one output, which can be either the registered or the
unregistered LUT output. The output is selected using the 2x1 mux. The LUT output is
registered using the flip-flop (generally a D flip-flop), to which the clock is applied. In
general, high-fanout signals like clock signals are routed via special-purpose dedicated
routing networks and are managed separately from other signals.

Routing channels are programmed to connect the various CLBs. The connections are made
according to the design: the CLBs are connected in such a way that the logic of the design is achieved.

Applications
 ASIC prototyping: Due to high cost of ASIC chips, the logic of the application
is first verified by dumping HDL code in a FPGA. This helps for faster and
cheaper testing. Once the logic is verified then they are made into ASICs.
 Very useful in applications that can make use of the massive parallelism
offered by their architecture. Example: code breaking, in particular brute-force
attack, of cryptographic algorithms.
 FPGAs are used for computational kernels such as FFT or convolution
instead of a microprocessor.
 Applications include digital signal processing, software-defined radio,
aerospace and defense systems, medical imaging, computer vision, speech
recognition, cryptography, bio-informatics, computer hardware emulation and
a growing range of other areas.

16---

Types of Adder Circuits

Adder circuits are classified into two types, namely the Half Adder Circuit and the Full Adder Circuit.

Half Adder Circuit

The half adder circuit is used to sum two binary digits, A and B. The half adder has two
outputs, sum and carry, where the sum is denoted 'S' and the carry is denoted 'C'.
The carry signal specifies an overflow into the next digit of a multi-digit addition; the
numeric value of the result is 2C + S. The simplest design of the half adder is shown below.
The half adder adds two input bits and generates a sum and a carry as outputs. The input
variables of the half adder are termed the augend and addend bits, whereas the output variables
are termed sum and carry.

Half Adder Circuit


Truth Table of Half Adder

The truth table of the half adder is shown below; using it we can get the Boolean functions for
sum & carry. Here a Karnaugh map is used to derive the Boolean equations for the sum and carry of
the half adder.
Truth Table of Half Adder
Half Adder Logic Diagram

The logic diagram of the half adder is shown below. If A & B are the binary inputs of the half adder,
then the Boolean function for the sum 'S' is the XOR of inputs A and B, and the function for
the carry 'C' is the AND of A and B. From the half adder logic diagram below, it is clear that
it requires one AND gate and one XOR gate. The universal gates, namely NAND and NOR,
can be used to design any digital circuit; for example, the figure below shows the design
of a half adder using NAND gates.
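The two equations above translate directly into code. A minimal sketch of the half adder:

```python
# Half adder: Sum = A XOR B, Carry = A AND B.

def half_adder(A, B):
    return A ^ B, A & B        # (sum, carry)

# Full truth table:
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)   # 1 + 1 = 10 in binary
```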

Full Adder Circuit

A full adder is used to add three input binary digits. Implementation of a full adder is
more difficult than a half adder. The full adder has three inputs and two outputs: the inputs
are A, B and Cin, and the outputs are sum 'S' and carry 'Cout'. Of the three inputs,
A and B are the addend and augend, while the third input Cin is the carry from the preceding
digit's operation. The full adder circuit generates a two-bit output denoted by the
signals S and Cout, where the numeric value of the result is 2·Cout + S.

Full Adder Circuit


Truth Table of Full Adder

The truth table of the full adder circuit is shown below; using it we can get the Boolean
functions for sum & carry. Here a Karnaugh map is used to derive the Boolean equations for
the sum and carry of the full adder.

Truth Table of Full Adder


Full Adder Logic Diagram

This full adder logic circuit is used to add three binary digits, namely A, B and Cin,
and produces two outputs, sum and carry. The full adder can be implemented with two
half adder circuits. The first half adder adds the two inputs to generate
a partial sum & carry, while the second half adder adds 'Cin' to the
sum of the first half adder to get the final sum output. If either half adder
generates a carry, there will be an output carry, so the output carry is the OR of the
two half adders' carry outputs. Take a look at the full adder logic circuit shown below.

Full Adder Logic Diagram


15a—

Ones Complement
The complement (or opposite) of +5 is −5. When representing positive
and negative numbers in 8-bit ones complement binary form, the
positive numbers are the same as in signed binary notation described
in Number Systems Module 1.4, i.e. the numbers 0 to +127 are
represented as 00000000₂ to 01111111₂. However, the complement
of these numbers, that is their negative counterparts from −127 to −0,
are represented by 'complementing' each 1 bit of the positive binary
number to 0 and each 0 to 1.

For example:

+5₁₀ is 00000101₂

−5₁₀ is 11111010₂

Notice in the above example that the most significant bit (msb) in the
negative number −5₁₀ is 1, just as in signed binary. The remaining 7
bits of the negative number however are not the same as in signed
binary notation. They are just the complement of the remaining 7 bits,
and these give the value or magnitude of the number.

The problem with the signed binary arithmetic described in Number
Systems Module 1.4 was that it gave the wrong answer when adding
positive and negative numbers. Does ones complement notation give
better results with negative numbers than signed binary?

Fig. 1.5.1 Adding Positive & Negative Numbers in Ones Complement

Fig. 1.5.1 shows the result of adding −4 to +6 using ones
complement (this is the same as subtracting +4 from +6, and so it is
crucial to arithmetic).

The result, 00000001₂, is 1₁₀ instead of 2₁₀.

This is better than subtraction in signed binary, but it is still not correct.
The result should be +2₁₀ but the result is +1 (notice that there has
also been a carry into the non-existent 9th bit).

Fig. 1.5.2 shows another example, this time adding two negative
numbers −4 and −3.
Because both numbers are negative, they are first converted to ones
complement notation.

+4₁₀ is 00000100 in pure 8-bit binary, so complementing gives
11111011.

Fig. 1.5.2 Adding Positive & Negative Numbers in Ones Complement

This is −4₁₀ in ones complement notation.

+3₁₀ is 00000011 in pure 8-bit binary, so complementing gives
11111100.

This is −3₁₀ in ones complement notation.

The result, 11110111₂, is in complemented form, so the 7 bits after
the sign bit (1110111) should be re-complemented and read as
0001000, which gives the value 8₁₀. As the most significant bit
(msb) of the result is 1, the result must be negative, which is correct,
but the remaining seven bits give a value of −8. This is still wrong
by 1; it should be −7.
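The worked examples above can be reproduced in code. A minimal sketch (the helper name is chosen for this example); note how truncating the 9th-bit carry reproduces the off-by-one result from Fig. 1.5.1:

```python
# 8-bit ones complement: negate by flipping every bit of the pattern.

def complement8(bits):
    return "".join("1" if b == "0" else "0" for b in bits)

assert complement8("00000101") == "11111010"      # -5 from +5
assert complement8("00000100") == "11111011"      # -4 from +4

# Adding +6 and -4 without the end-around carry gives 1, not 2:
total = int("00000110", 2) + int("11111011", 2)   # +6 + (-4)
result = total & 0xFF                             # drop the 9th-bit carry
assert result == 0b00000001                       # off by one, as noted
```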

15b—

Binary Multiplication
Similar to the multiplication of decimal numbers, binary multiplication follows the
same process to produce the product of two binary numbers. Binary
multiplication is much easier, as it involves only 0s and 1s. The four fundamental
rules for binary multiplication are

0×0=0
0×1=0
1×0=0
1×1=1
The multiplication of two binary numbers can be performed by using two common
methods, namely partial product addition and shifting, and using parallel multipliers.

Before discussing the types, let us look at the unsigned binary
multiplication process. Consider two 4-bit binary numbers, 1010 and 1011;
their multiplication is as follows:

      1010      (multiplicand, decimal 10)
    x 1011      (multiplier, decimal 11)
    ------
      1010
     1010
    0000
   1010
   -------
   1101110      (product, decimal 110)

From the above multiplication, partial products are generated for each digit in the
multiplier. Then all these partial products are added to produce the final product
value. In partial product multiplication, when the multiplier bit is zero the partial
product is zero, and when the multiplier bit is 1 the resulting partial product is the
multiplicand.

As with decimal numbers, each successive partial product is shifted one
position left relative to the preceding partial product before summing all partial
products.

Therefore, this method uses n shifts and adds to multiply two n-bit binary
numbers. A combinational circuit implemented to perform such multiplication is called an
array multiplier or combinational multiplier.
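The shift-and-add procedure above can be sketched directly; the function name is illustrative:

```python
# Shift-and-add multiplication of two n-bit numbers: one partial product
# per multiplier bit, each shifted left before the final summation.

def multiply(multiplicand, multiplier, n=4):
    product = 0
    for i in range(n):
        bit = (multiplier >> i) & 1
        partial = multiplicand if bit else 0   # 0 or the multiplicand
        product += partial << i                # shift, then accumulate
    return product

assert multiply(0b1010, 0b1011) == 0b1101110   # 10 x 11 = 110
```

An array multiplier performs the same partial-product generation and summation with combinational hardware instead of sequential shifts.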

17a—
17b—

Asynchronous counter                        Synchronous counter

Different flip-flops are driven by          All flip-flops are driven by the
different clocks                            same clock

It is slower in operation                   It is faster in operation

Fixed count sequence, either up or          Any count sequence is possible
down

Produces decoding errors                    Produces no decoding errors
