
Analog and digital signals are used to transmit information, usually through electric signals.

In both these technologies, the information, such as audio or video, is transformed into electric signals. The difference is that in analog technology, information is translated into electric pulses of varying amplitude, while in digital technology, information is translated into binary format (zero or one), where each bit represents one of two distinct amplitudes.

Comparison chart


Signal: Analog is a continuous signal which represents physical measurements. Digital signals are discrete-time signals generated by digital modulation.

Waves: Analog is denoted by sine waves; digital by square waves.

Representation: Analog uses a continuous range of values to represent information; digital uses discrete or discontinuous values.

Example: Analog: human voice in air, analog electronic devices. Digital: computers, CDs, DVDs, and other digital electronic devices.

Technology: Analog technology records waveforms as they are. Digital technology samples analog waveforms into a limited set of numbers and records them.

Data transmissions: Analog is subjected to deterioration by noise during transmission and the write/read cycle. Digital can be noise-immune, without deterioration during transmission and the write/read cycle.

Response to noise: Analog is more likely to be affected, reducing accuracy. Digital is less affected, since noise responses are analog in nature.

Flexibility: Analog hardware is not flexible. Digital hardware is flexible in implementation.

Uses: Analog is best suited for audio and video transmission and can be used in analog devices only. Digital is best suited for computing and digital electronics.

Applications: Analog: thermometer. Digital: PCs, PDAs.

Bandwidth: Analog signal processing can be done in real time and consumes less bandwidth. There is no guarantee that digital signal processing can be done in real time, and it consumes more bandwidth to carry the same information.

Memory: Analog data is stored in the form of a wave signal; digital data is stored in the form of binary bits.

Power: Analog instruments draw large power; digital instruments draw only negligible power.

Cost: Analog is low cost and portable; digital cost is high and not easily portable.

Impedance: Analog: low. Digital: high, on the order of 100 megaohms.

Errors: Analog instruments usually have a scale which is cramped at the lower end and give considerable observational errors. Digital instruments are free from observational errors like parallax and approximation errors.

Definitions of Analog vs Digital signals


An analog signal is any continuous signal for which the time-varying feature (variable) of the signal is a representation of some other time-varying quantity, i.e., analogous to another time-varying signal. It differs from a digital signal in that small fluctuations in the signal are meaningful. A digital signal uses discrete (discontinuous) values. By contrast, non-digital (or analog) systems use a continuous range of values to represent information. Although digital representations are discrete, the information represented can be either discrete, such as numbers or letters, or continuous, such as sounds, images, and other measurements of continuous systems.

Properties of Digital vs Analog signals


Digital information has certain properties that distinguish it from analog communication methods. These include:

Synchronization: digital communication uses specific synchronization sequences for determining synchronization.

Language: digital communication requires a language which both sender and receiver must possess and which specifies the meaning of symbol sequences.

Errors: disturbances in analog communication cause errors in the actual intended communication, but disturbances in digital communication do not cause errors, enabling error-free communication. Errors should be able to substitute, insert or delete symbols to be expressed.

Copying: analog communication copies are not as good, quality-wise, as their originals, while due to error-free digital communication, copies can be made indefinitely.

Granularity: when a continuously variable analog value is represented in digital form, there is a quantization error, which is the difference between the actual analog value and its digital representation. This property of digital communication is known as granularity.

Differences in Usage in Equipment


Many devices come with built-in translation facilities from analog to digital. Microphones and speakers are perfect examples of analog devices. Analog technology is cheaper, but there is a limitation on the size of data that can be transmitted at a given time. Digital technology has revolutionized the way most equipment works. Data is converted into binary code and then reassembled back into its original form at the reception point. Since these signals can be easily manipulated, digital technology offers a wider range of options. Digital equipment is more expensive than analog equipment.

Comparison of Analog vs Digital Quality

Digital devices translate and reassemble data and in the process are more prone to loss of quality as compared to analog devices. Computer advancement has enabled use of error detection and error correction techniques to remove disturbances artificially from digital signals and improve quality.

Differences in Applications
Digital technology has been most efficient in the cellular phone industry. Analog phones have become redundant even though their sound clarity and quality were good. Analog technology comprises natural signals like human speech. With digital technology, this human speech can be saved and stored in a computer. Thus digital technology opens up the horizon for endless possible uses.

APPLICATIONS AND ADVANTAGES OF ANALOG SIGNALS


Advantages and Disadvantages of Analog Signal
Advantages

The main advantage is the fine definition of the analog signal, which has the potential for an infinite amount of signal resolution. Compared to digital signals, analog signals are of higher density. Another advantage with analog signals is that their processing may be achieved more simply than with the digital equivalent. An analog signal may be processed directly by analog components, though some processes aren't available except in digital form.

Disadvantages

The primary disadvantage of analog signaling is that any system has noise, i.e., random unwanted variation. As the signal is copied and re-copied, or transmitted over long distances, these apparently random variations become dominant. Electrically, these losses can be diminished by shielding, good connections, and several cable types such as coaxial or twisted pair. The effects of noise create signal loss and distortion, which are impossible to recover, since amplifying the signal to recover attenuated parts of the signal amplifies the noise (distortion/interference) as well. Even if the resolution of an analog signal is higher than that of a comparable digital signal, the difference can be overshadowed by the noise in the signal.

HDTV is a great advance in home entertainment; however, during the transition period from analog to digital, there are still many consumers watching mostly analog television programs on their new HDTVs. This has generated a lot of complaints about the apparent degraded picture quality of analog television signals when viewed on an HDTV. Analog television signals, both broadcast and cable, as well as VHS, will in most cases look worse on an HDTV than they do on a standard analog television.

The reason for this is that HDTVs have the capability of displaying much more detail than an analog TV. This results in the video processing circuitry in the HDTV enhancing both the good and bad parts of a low-resolution image. The cleaner and more stable the signal, the better the result. However, if the picture has background color noise, signal interference, color bleeding, or edge problems (which may be unnoticeable on an analog TV, since its lower resolution is more forgiving), the video processing in an HDTV will attempt to clean it up, which may deliver mixed results. The quality of an analog television display on an HDTV also depends on the type of video processing circuitry employed by different HDTV makers, and some HDTVs perform the analog-to-digital conversion process better than others. When checking out HDTVs or reviews of HDTVs, make note of any comments regarding analog signal quality. Another important point is that most consumers upgrading to HDTV are also upgrading to a larger screen size. As the screen gets larger, lower-resolution images look worse, in much the same way as blowing up a photograph until shapes and edges become less defined. In other words, what looked really great on that old 27-inch TV isn't going to look quite as good on that new 42-inch plasma TV. Here are some suggestions:

1. Make sure you have the cleanest analog signal possible, or, better, switch to digital cable, HD cable, or HD satellite. If you have a high-performance HDTV, why waste your money by supplying it with an inferior signal source? You are paying for HD capability, so you should reap the rewards.

2. If you have an HD cable box or HD satellite box, connect it to the HDTV using HDMI, DVI, or component video connections (whichever type of connection the cable or satellite box uses to transfer HDTV and digital signals), rather than a standard RF connection.

3. Keep in mind that all over-the-air analog broadcast television signals will end on June 12, 2009, and you may have to switch to digital cable, or HD cable, at that time anyway.

Binary, Decimal and Hexadecimal Numbers

Decimals
To understand Binary and Hexadecimal numbers, it is best to know how Decimal Numbers work.

Every digit in a decimal number has a "position", and the decimal point helps us to know which position is which. The position just to the left of the point is the "Units" position. Every position further to the left is 10 times bigger, and every position further to the right is 10 times smaller. For example, in 17.591 the "7" is in the Units position, the "1" to its left is in the Tens position, and the "5" just right of the point is in the Tenths position.

Now, this is just a way of writing down a value. Other ways include Roman numerals, binary, hexadecimal, and more. You could even just draw dots on a sheet of paper! The Decimal Number System is also called "Base 10", because it is based on the number 10. There are 10 symbols (0, 1, 2, 3, 4, 5, 6, 7, 8 and 9), but notice something interesting: there is no symbol for "ten". "10" is actually two symbols put together, a "1" and a "0". In decimal you count "0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ..." but then you run out of symbols! So you add 1 on the left and then start again at 0: 10, 11, 12, ...

Counting with Different Number Systems


But you don't have to use 10 as a "Base". You could use 2 ("Binary"), 16 ("Hexadecimal"), or any number you want to! Example: in binary you count "0, 1, ..." but then you run out of symbols! So you add 1 on the left and then start again at 0: 10, 11, ...

You can count dots this way using any base from 2 to 16. For example, in base 2 the tally 11001 counts 1×16 + 1×8 + 1×1 = 16 + 8 + 1 = 25 dots.

So the general rule is: Count up until just before the "Base", then start at 0 again, but first you add 1 to the number on your left.
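The counting rule above is easy to sketch in Python (the helper name `count_in_base` is mine, not from the text):

```python
def count_in_base(base, how_many):
    """Count 0, 1, 2, ... in the given base, carrying when we run out of symbols."""
    digits = "0123456789ABCDEF"   # enough symbols for bases up to 16
    out = []
    for n in range(how_many):
        s = ""
        while True:
            s = digits[n % base] + s   # rightmost digit first
            n //= base
            if n == 0:
                break
        out.append(s)
    return out

print(count_in_base(2, 5))    # counting in binary: ['0', '1', '10', '11', '100']
print(count_in_base(16, 18))  # in hexadecimal, '10' and '11' follow 'F'
```

The inner loop is exactly the rule: write the current symbol, and when the symbols run out, carry 1 to the position on the left.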

Binary Numbers
Binary Numbers are just "Base 2" instead of "Base 10". So you start counting at 0, then 1, then you run out of digits ... so you start back at 0 again, but increase the number on the left by 1. Like this:
000
001
010 (there is no "2" in binary, so start back at 0 and add one to the number on the left)
011
100 (start back at 0 again, and add one to the number on the left... but that number is already at 1, so it also goes back to 0, and 1 is added to the next number on the left)
101
110
etc...

Hexadecimal Numbers
Hexadecimal numbers are interesting. There are 16 of them! They look the same as the decimal numbers up to 9, but then there are the letters ("A", "B", "C", "D", "E", "F") in place of the decimal numbers 10 to 15. So a single hexadecimal digit can show 16 different values instead of the normal 10, like this:
Decimal:     0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Hexadecimal: 0 1 2 3 4 5 6 7 8 9  A  B  C  D  E  F

Binary, hexadecimal, and octal refer to different number systems. The one that we typically use is called decimal. These number systems refer to the number of symbols used to represent numbers. In the decimal system, we use ten different symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. With these ten symbols, we can represent any quantity. For example, if we see a 2, then we know that there is two of something; this sentence has 2 periods on the end.. When we run out of symbols, we go to the next digit placement. To represent one higher than 9, we use 10, meaning one unit of ten and zero units of one. This may seem elementary, but it is crucial to understand our default number system if you want to understand other number systems. For example, consider a binary system, which only uses two symbols, 0 and 1: when we run out of symbols, we need to go to the next digit placement. So, we would count in binary 0, 1, 10, 11, 100, 101, and so on. This article will discuss the binary, hexadecimal, and octal number systems in more detail and explain their uses.
Table of Contents

How a Number System Works
Binary
Octal
Hexadecimal
Conversion
  Base to Decimal
  Decimal to Base
How?
Conversion
  From decimal to binary
  From binary to decimal
  From decimal to hexadecimal
  From hexadecimal to decimal
  From decimal to octal
  From octal to decimal
Fun Facts
End

How a Number System Works


Number systems are used to describe the quantity of something or represent certain information. Because of this, I can say that the word "calculator" contains ten letters. Our number system, the decimal system, uses ten symbols. Therefore, decimal is said to be Base Ten. By describing systems with bases, we can gain an understanding of how that particular system works. When we count in Base Ten, we count starting with zero and going up to nine in order.

0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
Once we reach the last symbol, we create a new placement in front of the first and count that up.

8, 9, 10, 11, 12, ..., 19, 20, ...

This continues when we run out of symbols for that placement. So, after 99, we go to 100.

The placement of a symbol indicates how much it is worth. Each additional placement is an additional power of 10. Consider the number 2853. We know this number is quite large, for example, if it pertains to the number of apples in a basket. That's a lot of apples. How do we know it is large? We look at the number of digits. Each additional placement is an additional power of 10, as stated above. Consider this chart.
10^3    10^2    10^1    10^0
digit   digit   digit   digit
×1000   ×100    ×10     ×1

Each additional digit represents a higher and higher quantity. This is applicable for Base 10 as well as to other bases. Knowing this will help you understand the other bases better.

Binary
Binary is another way of saying Base Two. So, in a binary number system, there are only two symbols used to represent numbers: 0 and 1. When we count up from zero in binary, we run out of symbols much more frequently.

0, 1,
From here, there are no more symbols. We do not go to 2 because in binary, a 2 doesn't exist. Instead, we use 10. In a binary system, 10 is equal to 2 in decimal. We can count further.
Binary:  0 1 10 11 100 101 110 111 1000 1001 1010
Decimal: 0 1  2  3   4   5   6   7    8    9   10

Just like in decimal, we know that the more digits there are, the larger the number. However, in binary, we use powers of two. In the binary number 1001101, we can create a chart to find out what this really means.
2^6   2^5   2^4   2^3   2^2   2^1   2^0
 1     0     0     1     1     0     1

64 + 0 + 0 + 8 + 4 + 0 + 1 = 77

Since this is base two, however, the numbers don't grow quite as quickly as they do in decimal. Even so, a binary number with 10 digits can be larger than 1000 in decimal.
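As a quick cross-check of the chart above, here is a small Python sketch that expands 1001101 digit by digit; Python's built-in `int` with a base argument does the same job:

```python
bits = "1001101"
# Walk the digits from the right, adding 2**position wherever a 1 appears.
total = sum(int(b) * 2**p for p, b in enumerate(reversed(bits)))
print(total)           # 77
print(int(bits, 2))    # Python's built-in conversion agrees: 77
```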

The binary system is useful in computer science and electrical engineering. Transistors operate from the binary system, and transistors are found in practically all electronic devices. A 0 means no current, and a 1 means to allow current. With various transistors turning on and off, signals and electricity are sent to do various things such as making a call or putting these letters on the screen. Computers and electronics work with bytes, or eight-digit binary numbers. Each byte has encoded information that a computer is able to understand. Many bytes are strung together to form digital data that can be stored for use later.

Octal
Octal is another number system with fewer symbols to use than our conventional number system. Octal is a fancy name for Base Eight, meaning eight symbols are used to represent all the quantities. They are 0, 1, 2, 3, 4, 5, 6, and 7. When we count up one from the 7, we need a new placement to represent what we call 8, since an 8 doesn't exist in octal. So, after 7 is 10.
Octal:   0 1 2 3 4 5 6 7 10 11 12 17 20 30 77 100
Decimal: 0 1 2 3 4 5 6 7  8  9 10 15 16 24 63  64

Just like how we used powers of ten in decimal and powers of two in binary, to determine the value of a number we will use powers of 8 since this is Base Eight. Consider the number 3623 in base eight.
8^3   8^2   8^1   8^0
 3     6     2     3

1536 + 384 + 16 + 3 = 1939

Each additional placement to the left has more value than it did in binary. The third digit from the right in binary only represented 2^(3−1), which is 4. In octal, that is 8^(3−1), which is 64.

Hexadecimal
The hexadecimal system is Base Sixteen. As its base implies, this number system uses sixteen symbols to represent numbers. Unlike binary and octal, hexadecimal has six additional symbols that it uses beyond the conventional ones found in decimal. But what comes after 9? 10 is not a single digit but two. Fortunately, the convention is that once additional symbols are needed beyond the normal ten, letters are to be used. So, in hexadecimal, the total list of symbols to use is 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. (On a digital display, the letters B and D are shown lowercase, to distinguish them from 8 and 0.) When counting in hexadecimal, you count 0, 1, 2, and so on. However, when you reach 9, you go directly to A. Then, you count B, C, D, E, and F. But what is next? We are out of symbols! When we run out of symbols, we create a new digit placement and move on. So after F is 10. You count further until you reach 19. After 19, the next number is 1A. This goes on forever.
Hexadecimal: 9  A  B  C  D  E  F 10 11 19 1A 1B 1C  9F  A0
Decimal:     9 10 11 12 13 14 15 16 17 25 26 27 28 159 160

Digits are explained as powers of 16. Consider the hexadecimal number 2DB7.
16^3   16^2   16^1   16^0
  2      D      B      7

8192 + 3328 + 176 + 7 = 11703

As you can see, placements in hexadecimal are worth a whole lot more than in any of the other three number systems.

Conversion
It is important to know that 364 in octal is not equal to the normal 364. This is just like how a 10 in binary is certainly not 10 in decimal. 10 in binary (this will be written as 10₂ from now on) is equal to 2. 10₈ is equal to 8. How on earth do we know this? What is 20C.38F₁₆, and how do we find out? Here is why it is important to understand how the number systems work. By using our powers of the base number, it becomes possible to turn any number to decimal and from decimal to any number.

Base to Decimal
So, we know that 364₈ is not equal to the decimal 364. Then what is it? There is a simple method for converting from any base to the decimal base ten. If you remember how we dissected the numbers above, we used powers, such as 2⁴, and ended up with a number we understand. This is exactly what we do to convert from a base to decimal. We find out the true value of each digit according to its placement and add them together. As a formula, this algorithm looks like:

V₁₀ = vₚBᵖ + vₚ₋₁Bᵖ⁻¹ + ... + v₁B + v₀

where V₁₀ is the decimal value, v is the digit in a placement, p is the placement from the right of the number assuming the rightmost placement is 0, and B is the starting base. Do not be daunted by the formula! We are going to go through this one step at a time. So, let us say we had the simple hexadecimal number 2B. We want to know what this number is in decimal so that we can understand it better. How do we do this? Let us use the formula above. Define every variable first. We want to find V₁₀, so that is unknown. The number 2B₁₆ has two positions since it has two digits. p therefore is one less than that (see footnote 1), so p is 1. The number is in base 16, so B is 16. Finally, we want to know what v is, but there are multiple v's: v₁ and v₀. This refers to the value of the digit in the subscripted position. v₁ refers to the digit in position one (the second digit from the right), so v₁ is 2. v₀ is the first digit, which is B. In the case of the conversion, you must convert all the letters to what they are in decimal. B is 11 in decimal, so v₀ is 11. Now, plug all this into the formula:
V₁₀ = 2(16¹) + 11(16⁰)
V₁₀ = 2(16) + 11(1)
V₁₀ = 32 + 11
V₁₀ = 43

Therefore, 2B₁₆ is equal to 43.

Now, let me explain how this works. Remember how digit placement affects the actual value? For example, in the decimal number 123, the "1" represents 100, which is 1×10². The "2" is 20, or 2×10¹. Likewise, in the number 2B₁₆, the "2" is 2×16¹, and the B is 11×16⁰. We can determine the value of numbers in this way. For the number 364₈, we will make a chart that exposes the decimal value of each individual digit. Then, we can add them up so that we have the whole. The number has three digits, so starting from the right, we have position 0, position 1, and position 2. Since this is base eight, we will use powers of 8.

8²   8¹   8⁰
3    6    4

Now, 8² is 64, 8¹ is 8, and 8⁰ is 1. Now what? Remember what we did with the decimal number 123? We took the value of the digit times the respective power. So, considering this further:

3×64   6×8   4×1
 192    48     4

Now, we add the values together to get 244. Therefore, 364₈ is equal to 244₁₀. In the same way that for 123 we say there is one group of 100, two groups of 10, and three groups of 1, for octal and the number 364, there are three groups of 64, six groups of 8, and four groups of 1.
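The place-value method just described can be sketched as a short Python function (the name `base_to_decimal` is mine). Instead of computing powers explicitly, it uses the equivalent trick of multiplying the running value by the base before adding each digit:

```python
def base_to_decimal(digits, base):
    """Positional formula V = sum of v*B^p, folded left to right."""
    value = 0
    for ch in digits:
        # int(ch, 16) also maps the letters A-F to 10-15, covering hexadecimal.
        value = value * base + int(ch, 16)
    return value

print(base_to_decimal("364", 8))   # 244, as computed above
print(base_to_decimal("2B", 16))   # 43
```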

Decimal to Base
Just like how we can convert from any base to decimal, it is possible to convert decimal to any base. Let us say that we want to represent the number 236₁₀ in binary, octal, and hexadecimal. What we need to do is pretty much reverse whatever we did above. There isn't really a good formula for this, but there is an algorithm that you can follow which will help accomplish what we want.
1. Let p = int(log_B(V)).
2. Let v = int(V / Bᵖ). (v is the next digit to the right.)
3. Make V = V − vBᵖ.
4. Decrease p by 1 and repeat steps 2 and 3 until p drops below 0.

This algorithm may look confusing at first, but let us go through an example to see how it can be used. We want to represent 236 in binary, octal, and hexadecimal. So, let's try getting it to binary first. The first step is to make p equal to int(log_B(V)). B is the base we want to convert to, which is 2, and V is the number we want to convert, 236. Essentially, we are taking the base-2 logarithm of 236 and disregarding the decimal part, which tells us the position of the leftmost digit. Doing this makes p become 7. Step two says to let v equal our number V divided by Bᵖ, dropping any fraction. Bᵖ is 2⁷, or 128, and the integer part of 236 divided by 128 is 1. Therefore, our first digit on the left is 1. Now, we change V to become V minus the digit times Bᵖ. So, V will now be 236 − 128, or 108. We then decrease p to 6 and repeat; whenever V divided by Bᵖ is 0, the digit for that placement is simply 0. With V now 108, 108 divided by 2⁶ (that is, 64) is 1. The 1 goes to the right of the first 1, so now we have 11, and V becomes 44 since 108 − 64 is 44. Carrying on until p drops below 0 gives 236₁₀ = 11101100₂.
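The four steps translate directly into Python. This is a sketch (the name `decimal_to_base` is mine), and note that the float-based logarithm is fine for small numbers but can be off by one at exact powers of the base:

```python
import math

def decimal_to_base(value, base):
    """Peel off one digit per placement, highest power first."""
    if value == 0:
        return "0"
    p = int(math.log(value, base))     # step 1: placement of the leftmost digit
    digits = ""
    while p >= 0:
        v = value // base**p           # step 2: the digit for this placement
        digits += "0123456789ABCDEF"[v]
        value -= v * base**p           # step 3: remove what we accounted for
        p -= 1                         # step 4: move one placement right
    return digits

print(decimal_to_base(236, 2))    # 11101100
print(decimal_to_base(236, 8))    # 354
print(decimal_to_base(236, 16))   # EC
```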

How?
Now you might be asking yourself how to read these numbers. Well, that's not so difficult. First, I'll give a general mathematical explanation, which can be fit into one formula:

V = v × Bᵖ

In human language: the value of a digit in a number is equal to the value of that digit on its own, multiplied by the base of the number system raised to the power of the digit's position, counted from right to left in the number and starting at 0. Read that a few times and try to understand it. Thus, the value of a digit in binary doubles every time we move to the left (see table below). From this it follows that every hexadecimal digit can be split up into 4 binary digits; in computer language, a nibble. Now take a look at the following table:
Binary (8 4 2 1)   Hexadecimal   Decimal
0 0 0 0                 0            0
0 0 0 1                 1            1
0 0 1 0                 2            2
0 0 1 1                 3            3
0 1 0 0                 4            4
0 1 0 1                 5            5
0 1 1 0                 6            6
0 1 1 1                 7            7
1 0 0 0                 8            8
1 0 0 1                 9            9
1 0 1 0                 A           10
1 0 1 1                 B           11
1 1 0 0                 C           12
1 1 0 1                 D           13
1 1 1 0                 E           14
1 1 1 1                 F           15

Another interesting point: look at the values in the column tops, then look at the bit patterns. You see what I mean? Yeah, you're right! The bits switch on and off following their value. The value of the first digit (starting from the right) goes like this: 0, 1, 0, 1, 0, 1, ... The second digit (value = 2): 0, 0, 1, 1, 0, 0, 1, 1, ... The third digit (value = 4): 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, ... And so on. Now, what about greater numbers? We'll need an extra digit (but I think you figured that out by yourself). For the values starting from 16, our table looks like this:

Binary (16 8 4 2 1)   Hexadecimal   Decimal
1 0 0 0 0                  10           16
1 0 0 0 1                  11           17
1 0 0 1 0                  12           18
1 0 0 1 1                  13           19
1 0 1 0 0                  14           20
1 0 1 0 1                  15           21
1 0 1 1 0                  16           22
1 0 1 1 1                  17           23
1 1 0 0 0                  18           24
1 1 0 0 1                  19           25
1 1 0 1 0                  1A           26
1 1 0 1 1                  1B           27
1 1 1 0 0                  1C           28
1 1 1 0 1                  1D           29
1 1 1 1 0                  1E           30
1 1 1 1 1                  1F           31

For octals, this is similar; the only difference is that we need only 3 digits to express the values 0 to 7. Our table looks like this:

Binary (4 2 1)   Octal   Decimal
0 0 0              0        0
0 0 1              1        1
0 1 0              2        2
0 1 1              3        3
1 0 0              4        4
1 0 1              5        5
1 1 0              6        6
1 1 1              7        7

Conversion

In the previous sections I explained the logic behind the binary, hexadecimal and octal number systems. Now I'll explain something more practical. If you fully understood the previous part, you can skip this topic.

From decimal to binary


Step 1: Check if your number is odd or even.
Step 2: If it's even, write 0 (proceeding backwards, adding binary digits to the left of the result).
Step 3: Otherwise, if it's odd, write 1 (in the same way).
Step 4: Divide your number by 2 (dropping any fraction) and go back to step 1. Repeat until your original number is 0.

An example: Convert 68 to binary:


- 68 is even, so we write 0.
- Dividing 68 by 2, we get 34. 34 is also even, so we write 0 (result so far: 00).
- Dividing 34 by 2, we get 17. 17 is odd, so we write 1 (result so far: 100; remember to add it on the left).
- Dividing 17 by 2, we get 8.5, or just 8. 8 is even, so we write 0 (result so far: 0100).
- Dividing 8 by 2, we get 4. 4 is even, so we write 0 (result so far: 00100).
- Dividing 4 by 2, we get 2. 2 is even, so we write 0 (result so far: 000100).
- Dividing 2 by 2, we get 1. 1 is odd, so we write 1 (result so far: 1000100).
- Dividing by 2, we get 0.5, or just 0, so we're done. Final result: 1000100
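The odd/even steps above translate almost word for word into Python (a sketch; `to_binary` is my name for it):

```python
def to_binary(n):
    """Repeatedly record the parity and halve, building the result on the left."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # even -> 0, odd -> 1, written on the left
        n //= 2                    # divide by 2, dropping any fraction
    return bits

print(to_binary(68))   # 1000100
```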

From binary to decimal

Write the values in a table as shown before, or do so mentally.

Add the value in the column header to your total if the digit is turned on (1). Skip it if the digit is turned off (0). Move on to the next digit until you've done them all.

An example: Convert 101100 to decimal:


- Highest digit value: 32. Current number: 32.
- Skip the "16" digit; its value is 0. Current number: 32.
- Add 8. Current number: 40.
- Add 4. Current number: 44.
- Skip the "2" and "1" digits, because their values are 0.
- Final answer: 44
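The same column-value bookkeeping, sketched in Python (`binary_to_decimal` is my name for it):

```python
def binary_to_decimal(bits):
    """Add the column value (1, 2, 4, 8, ...) wherever the digit is turned on."""
    total = 0
    for p, b in enumerate(reversed(bits)):   # p counts positions from the right
        if b == "1":
            total += 2**p
    return total

print(binary_to_decimal("101100"))   # 44
```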

From decimal to hexadecimal.


THIS IS ONLY ONE OF THE MANY WAYS!

1. Convert your decimal number to binary.
2. Split the binary number up into nibbles of 4, starting at the end.
3. Look at the first table on this page and write the matching hexadecimal digit in place of each nibble.

(You can add zeroes at the beginning if the number of bits is not divisible by 4, because, just as in decimal, leading zeroes don't matter.)

An example: Convert 39 to hexadecimal:

- First, we convert to binary (see above). Result: 100111
- Next, we split it up into nibbles: 0010/0111. (Note: I added two zeroes to clarify the fact that these are nibbles.)
- After that, we convert the nibbles separately. Final result: 27
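The nibble method is a few lines of Python (a sketch; the name `to_hex_via_nibbles` is mine):

```python
def to_hex_via_nibbles(n):
    """Convert to binary, pad to whole nibbles, map each nibble to one hex digit."""
    bits = bin(n)[2:]
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # leading zeroes don't matter
    return "".join("0123456789ABCDEF"[int(bits[i:i+4], 2)]
                   for i in range(0, len(bits), 4))

print(to_hex_via_nibbles(39))   # 27
```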

From hexadecimal to decimal


Check the formula in the first paragraph and use it on the digits in your hexadecimal number. (This actually works for any conversion to decimal notation.)

An example: Convert 1AB to decimal:

- Value of B = 16⁰ × 11 = 11, obviously.
- Value of A = 16¹ × 10 = 160. Our current result is 171.
- Value of 1 = 16² × 1 = 256. Final result: 427

From decimal to octal


1. Convert to binary.
2. Split up into groups of 3 digits, starting on the right.
3. Convert each group to an octal digit from 0 to 7.

Example: Convert 25 to octal


- First, we convert to binary. Result: 11001
- Next, we split up: 011/001
- Conversion to octal: 31
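The octal version differs only in the group size; a Python sketch (the name is mine):

```python
def to_octal_via_triples(n):
    """Convert to binary, pad to whole 3-bit groups, read each group as 0-7."""
    bits = bin(n)[2:]
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    return "".join(str(int(bits[i:i+3], 2)) for i in range(0, len(bits), 3))

print(to_octal_via_triples(25))   # 31
```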

From octal to decimal


Again, apply the formula from above. Example: convert octal 42 to decimal:

- Value of 2 = 8⁰ × 2 = 2
- Value of 4 = 8¹ × 4 = 32
- Result: 34

Fun Facts
OK, these may not be 100% "fun", but nonetheless are interesting.

Do you tend to see numbers beginning with 0x? This is common notation to specify hexadecimal numbers, so you may see something like:

0x000000 0x000002 0x000004

This notation is most commonly used to list computer addresses, which are a whole different story.

This is pretty obvious, but you can "spell" words using hexadecimal numbers. For example: CAB = 3243 in decimal notation.
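Python understands the 0x prefix directly, so both facts above are easy to check:

```python
print(0xCAB)        # the hex "word" CAB is 3243 in decimal
print(hex(3243))    # and back again: '0xcab'
```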

End
Did you understand everything? If you think so, test yourself:
Convert each of the following numbers between binary, decimal, and hexadecimal:

- 3A (hexadecimal)
- 76 (decimal)
- 101110 (binary)
- 88
- 47
- 1011110 (binary)
Make some exercises yourself, if you want some more.


Footnotes

1. It is one less because the rightmost position is 0, not 1. So p is always one less than the number of digits.

LOGIC GATES POSITIVE AND NEGATIVE LOGIC

Positive Logic: With reference to positive logic, the logic 1 state is the more positive voltage level and the logic 0 state is the more negative voltage level. In other words, the active-high level is 1 and the active-low level is 0. For instance, V(0) = 0 V and V(1) = 5 V, or V(0) = 5 V and V(1) = 15 V.

Negative Logic: With reference to negative logic, the logic 0 state is the more positive voltage level and the logic 1 state is the more negative voltage level. In other words, the active-high level is 0 and the active-low level is 1. For instance, V(0) = 5 V and V(1) = 0 V, or V(0) = 15 V and V(1) = 5 V.

Thus a positive logic AND gate acts as a negative logic OR gate, and vice versa.

Editor's Note: This is the second in a four-part mini-series on different ways of looking at logical representations, abstracted from the book Bebop to the Boolean Boogie (An Unconventional Guide to Electronics) with the kind permission of the publisher. The topics in this mini-series are as follows:

Part 1: Assertion-Level Logic
Part 2: Positive vs Negative Logic
Part 3: Reed-Muller Logic
Part 4: Gray Codes

The terms positive logic and negative logic refer to two conventions that dictate the relationship between logical values and the physical voltages used to represent them. Unfortunately, although the core concepts are relatively simple, fully comprehending all of the implications associated with these conventions requires an exercise in lateral thinking sufficient to make even the strongest amongst us break down and weep! Before plunging into the fray, it is important to understand that logic 0 and logic 1 are always equivalent to the Boolean logic concepts of False and True, respectively (unless you're really taking a walk on the wild side, in which case all bets are off). The reason these terms are used interchangeably is that digital functions can be considered to represent either logical or arithmetic operations (Fig 1).

Fig 1. Logical versus arithmetic views of a digital function.

Having said this, it is generally preferable to employ a single consistent format to cover both cases, and it is easier to view logical operations in terms of "0s" and "1s" than it is to view arithmetic operations in terms of "Fs" and "Ts". The key point to remember as we go forward is that logic 0 and logic 1 are logical concepts that have no direct relationship to any physical values.

Physical-to-abstract mapping (NMOS logic)

OK, let's gird up our loins and meander our way through the morass one step at a time. The process of relating logical values to physical voltages begins by defining the frames of reference to be used. One absolute frame of reference is provided by truth tables, which are always associated with specific functions (Fig 2).

2. Absolute relationships between truth tables and functions.

Another absolute frame of reference is found in the physical world, where specific voltage levels applied to the inputs of a digital function cause corresponding voltage responses on the outputs. These relationships can also be represented in truth table form. Consider a logic gate constructed using only NMOS transistors (Fig 3).

3. The physical mapping of an NMOS logic gate.

With NMOS transistors connected as shown in Fig 3, an input connected to the more negative Vss turns that transistor OFF, and an input connected to the more positive Vdd turns that transistor ON. The final step is to define the mapping between the physical and abstract worlds: either 0 V is mapped to False and the positive supply (+ve) is mapped to True, or vice versa (Fig 4).

4. The physical-to-abstract mapping of an NMOS logic gate.

Using the positive logic convention, the more positive potential is considered to represent True and the more negative potential is considered to represent False (hence, positive logic is also known as positive-true). By comparison, using the negative logic convention, the more negative potential is considered to represent True and the more positive potential is considered to represent False (hence, negative logic is also known as negative-true). Thus, this circuit may be considered to be performing either a NAND function in positive logic or a NOR function in negative logic. (Are we having fun yet?)
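This duality is easy to check mechanically. The following Python sketch (my own illustration, not part of the original article) encodes the gate's physical truth table purely in terms of voltage levels, then reads the same rows under both conventions:

```python
from itertools import product

# Physical truth table of the NMOS gate in Fig 3, described purely in
# voltage levels: the output is LOW only when both inputs are HIGH.
def gate_output_high(a_high, b_high):
    return not (a_high and b_high)

for a_high, b_high in product([False, True], repeat=2):
    out_high = gate_output_high(a_high, b_high)

    # Positive logic: HIGH -> 1, LOW -> 0. The readings form a NAND table.
    A, B, Y = int(a_high), int(b_high), int(out_high)
    assert Y == int(not (A and B))

    # Negative logic: LOW -> 1, HIGH -> 0. The same voltages form a NOR table.
    A, B, Y = int(not a_high), int(not b_high), int(not out_high)
    assert Y == int(not (A or B))
print("one physical gate: NAND in positive logic, NOR in negative logic")
```

The same physical rows read as a NAND truth table under positive logic and as a NOR truth table under negative logic; nothing about the hardware changes, only the mapping.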

ALGEBRA
A mathematician named DeMorgan developed a pair of important rules regarding group complementation in Boolean algebra. By group complementation, I'm referring to the complement of a group of terms, represented by a long bar over more than one variable. You should recall from the chapter on logic gates that inverting all inputs to a gate reverses that gate's essential function from AND to OR, or vice versa, and also inverts the output. So, an OR gate with all inputs inverted (a Negative-OR gate) behaves the same as a NAND gate, and an AND gate with all inputs inverted (a Negative-AND gate) behaves the same as a NOR gate. DeMorgan's theorems state the same equivalence in "backward" form: that inverting the output of any gate results in the same function as the opposite type of gate (AND vs. OR) with inverted inputs:
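These gate equivalences, and the theorems themselves, can be confirmed by brute force. Here is a short Python sketch (an illustration of mine, not part of the original text) that checks both theorems over every input combination:

```python
from itertools import product

def nand(A, B):
    return not (A and B)        # (AB)'

def negative_or(A, B):
    return (not A) or (not B)   # A' + B'

def nor(A, B):
    return not (A or B)         # (A + B)'

def negative_and(A, B):
    return (not A) and (not B)  # A'B'

# DeMorgan: (AB)' = A' + B'  and  (A + B)' = A'B', for every input pair.
for A, B in product([False, True], repeat=2):
    assert nand(A, B) == negative_or(A, B)
    assert nor(A, B) == negative_and(A, B)
print("DeMorgan's theorems verified for all input combinations")
```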

A long bar extending over the term AB acts as a grouping symbol, and as such is entirely different from the product of A and B independently inverted. In other words, (AB)' is not equal to A'B'. Because the "prime" symbol (') cannot be stretched over two variables like a bar can, we are forced to use parentheses to make it apply to the whole term AB in the previous sentence. A bar, however, acts as its own grouping symbol when stretched over more than one variable. This has profound impact on how Boolean expressions are evaluated and reduced, as we shall see. DeMorgan's theorem may be thought of in terms of breaking a long bar symbol. When a long bar is broken, the operation directly underneath the break changes from addition to multiplication, or vice versa, and the broken bar pieces remain over the individual variables. To illustrate:

When multiple "layers" of bars exist in an expression, you may only break one bar at a time, and it is generally easier to begin simplification by breaking the longest (uppermost) bar first. To illustrate, let's take the expression (A + (BC)')' and reduce it using DeMorgan's Theorems:

Following the advice of breaking the longest (uppermost) bar first, I'll begin by breaking the bar covering the entire expression as a first step:

As a result, the original circuit is reduced to a three-input AND gate with the A input inverted:
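The reduction can be double-checked exhaustively. In this Python sketch (mine, not from the text), `original` encodes (A + (BC)')' and `reduced` encodes A'BC:

```python
from itertools import product

# The original expression (A + (BC)')' and its reduced form A'BC should
# agree for every combination of inputs.
def original(A, B, C):
    return not (A or not (B and C))

def reduced(A, B, C):
    return (not A) and B and C

for A, B, C in product([False, True], repeat=3):
    assert original(A, B, C) == reduced(A, B, C)
print("(A + (BC)')' reduces to A'BC")
```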

You should never break more than one bar in a single step, as illustrated here:

As tempting as it may be to conserve steps and break more than one bar at a time, it often leads to an incorrect result, so don't do it! It is possible to properly reduce this expression by breaking the short bar first, rather than the long bar first:

The end result is the same, but more steps are required compared to using the first method, where the longest bar was broken first. Note how in the third step we broke the long bar in two places. This is a legitimate mathematical operation, and not the same as breaking two bars in one step! The prohibition against breaking more than one bar in one step is not a prohibition against breaking a bar in more than one place. Breaking in more than one place in a single step is okay; breaking more than one bar in a single step is not. You might be wondering why parentheses were placed around the sub-expression B' + C', considering the fact that I just removed them in the next step. I did this to emphasize an important but easily neglected aspect of DeMorgan's theorem. Since a long bar functions as a grouping symbol, the variables formerly grouped by a broken bar must remain grouped lest proper precedence (order of operation) be lost. In this example, it really wouldn't matter if I forgot to put parentheses in after breaking the short bar, but in other cases it might. Consider this example, starting with a different expression:

As you can see, maintaining the grouping implied by the complementation bars for this expression is crucial to obtaining the correct answer. Let's apply the principles of DeMorgan's theorems to the simplification of a gate circuit:
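To see why the grouping matters, consider a small illustration of my own (the expression ((A + B)C)' is hypothetical, not taken from the text's figures). Keeping the group after breaking the long bar gives A'B' + C', while dropping the parentheses gives A' + B' + C', and the two disagree:

```python
from itertools import product

# Hypothetical expression (mine, not from the text's figures): ((A + B)C)'.
def correct(A, B, C):
    return not ((A or B) and C)

def grouping_kept(A, B, C):
    return ((not A) and (not B)) or (not C)   # (A + B)' + C' -> A'B' + C'

def grouping_lost(A, B, C):
    return (not A) or (not B) or (not C)      # A' + B' + C' (parentheses dropped)

mismatches = []
for A, B, C in product([False, True], repeat=3):
    assert correct(A, B, C) == grouping_kept(A, B, C)
    if grouping_kept(A, B, C) != grouping_lost(A, B, C):
        mismatches.append((A, B, C))
print("dropping the parentheses changes the result on", len(mismatches), "rows")
```

The version with the grouping preserved matches the original expression on every row, while the version with the parentheses dropped differs on two of the eight input combinations.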

As always, our first step in simplifying this circuit must be to generate an equivalent Boolean expression. We can do this by placing a sub-expression label at the output of each gate, as the inputs become known. Here's the first step in this process:

Next, we can label the outputs of the first NOR gate and the NAND gate. When dealing with inverted-output gates, I find it easier to write an expression for the gate's output without the final inversion, with an arrow pointing to just before the inversion bubble. Then, at the wire leading out of the gate (after the bubble), I write the full, complemented expression. This helps ensure I don't forget a complementing bar in the sub-expression, by forcing myself to split the expression-writing task into two steps:

Finally, we write an expression (or pair of expressions) for the last NOR gate:

Now, we reduce this expression using the identities, properties, rules, and theorems (DeMorgan's) of Boolean algebra:

The equivalent gate circuit for this much-simplified expression is as follows:

REVIEW:
- DeMorgan's Theorems describe the equivalence between gates with inverted inputs and gates with inverted outputs. Simply put, a NAND gate is equivalent to a Negative-OR gate, and a NOR gate is equivalent to a Negative-AND gate.
- When "breaking" a complementation bar in a Boolean expression, the operation directly underneath the break (addition or multiplication) reverses, and the broken bar pieces remain over the respective terms.
- It is often easier to approach a problem by breaking the longest (uppermost) bar before breaking any bars under it.
- You must never attempt to break two bars in one step!

Complementation bars function as grouping symbols. Therefore, when a bar is broken, the terms underneath it must remain grouped. Parentheses may be placed around these grouped terms as a help to avoid changing precedence.

Converting truth tables into Boolean expressions


In designing digital circuits, the designer often begins with a truth table describing what the circuit should do. The design task is largely to determine what type of circuit will perform the function described in the truth table. While some people seem to have a natural ability to look at a truth table and immediately envision the necessary logic gate or relay logic circuitry for the task, there are procedural techniques available for the rest of us. Here, Boolean algebra proves its utility in a most dramatic way. To illustrate this procedural method, we should begin with a realistic design problem. Suppose we were given the task of designing a flame detection circuit for a toxic waste incinerator. The intense heat of the fire is intended to neutralize the toxicity of the waste introduced into the incinerator. Such combustion-based techniques are commonly used to neutralize medical waste, which may be infected with deadly viruses or bacteria:

So long as a flame is maintained in the incinerator, it is safe to inject waste into it to be neutralized. If the flame were to be extinguished, however, it would be unsafe to continue to inject waste into the combustion chamber, as it would exit the exhaust un-neutralized, and pose a health threat to anyone in close proximity to the exhaust. What we need in this system is a sure way of detecting the presence of a flame, and permitting waste to be injected only if a flame is "proven" by the flame detection system.

Several different flame-detection technologies exist: optical (detection of light), thermal (detection of high temperature), and electrical conduction (detection of ionized particles in the flame path), each one with its unique advantages and disadvantages. Suppose that, due to the high degree of hazard involved with potentially passing un-neutralized waste out the exhaust of this incinerator, it is decided that the flame detection system be made redundant (multiple sensors), so that failure of a single sensor does not lead to an emission of toxins out the exhaust. Each sensor comes equipped with a normally-open contact (open if no flame, closed if flame detected) which we will use to activate the inputs of a logic system:

Our task, now, is to design the circuitry of the logic system to open the waste valve if and only if there is a good flame proven by the sensors. First, though, we must decide what the logical behavior of this control system should be. Do we want the valve to be opened if only one out of the three sensors detects flame? Probably not, because this would defeat the purpose of having multiple sensors. If any one of the sensors were to fail in such a way as to falsely indicate the presence of flame when there was none, a logic system based on the principle of "any one out of three sensors showing flame" would give the same output that a single-sensor system would with the same failure. A far better solution would be to design the system so that the valve is commanded to open if and only if all three sensors detect a good flame. This way, any single, failed sensor falsely showing flame could not keep the valve in the open position; rather, it would require all three sensors to be failed in the same manner -- a highly improbable scenario -- for this dangerous condition to occur. Thus, our truth table would look like this:

It does not require much insight to realize that this functionality could be generated with a three-input AND gate: the output of the circuit will be "high" if and only if input A AND input B AND input C are all "high":

If using relay circuitry, we could create this AND function by wiring three relay contacts in series, or simply by wiring the three sensor contacts in series, so that the only way electrical power could be sent to open the waste valve is if all three sensors indicate flame:

While this design strategy maximizes safety, it makes the system very susceptible to sensor failures of the opposite kind. Suppose that one of the three sensors were to fail in such a way that it indicated no flame when there really was a good flame in the incinerator's combustion chamber. That single failure would shut off the waste valve unnecessarily, resulting in lost production time and wasted fuel (feeding a fire that wasn't being used to incinerate waste). It would be nice to have a logic system that allowed for this kind of failure without shutting the system down unnecessarily, yet still provide sensor redundancy so as to maintain safety in the event that any single sensor failed "high" (showing flame at all times, whether or not there was one to detect). A strategy that would meet both needs would be a "two out of three" sensor logic, whereby the waste valve is opened if at least two out of the three sensors show good flame. The truth table for such a system would look like this:

Here, it is not necessarily obvious what kind of logic circuit would satisfy the truth table. However, a simple method for designing such a circuit is found in a standard form of Boolean expression called the Sum-Of-Products, or SOP, form. As you might suspect, a Sum-Of-Products Boolean expression is literally a set of Boolean terms added (summed) together, each term being a multiplicative (product) combination of Boolean variables. An example of an SOP expression would be something like this: ABC + BC + DF, the sum of products "ABC," "BC," and "DF." Sum-Of-Products expressions are easy to generate from truth tables. All we have to do is examine the truth table for any rows where the output is "high" (1), and write a Boolean product term that would equal a value of 1 given those input conditions. For instance, in the fourth row down in the truth table for our two-out-of-three logic system, where A=0, B=1, and C=1, the product term would be A'BC, since that term would have a value of 1 if and only if A=0, B=1, and C=1:

Three other rows of the truth table have an output value of 1, so those rows also need Boolean product expressions to represent them:

Finally, we join these four Boolean product expressions together by addition, to create a single Boolean expression describing the truth table as a whole:
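The procedure just described can be mimicked in a few lines of code. This Python sketch is an illustration of mine (the helper names good_flame, term_string, and sop are not from the text); it collects the output-1 rows of the two-out-of-three truth table and forms the SOP expression from them:

```python
from itertools import product

# Two-out-of-three "good flame" function: output is 1 when at least
# two of the three sensor inputs are 1.
def good_flame(A, B, C):
    return (A + B + C) >= 2

# Each truth-table row with output 1 becomes one product term of the SOP.
terms = [row for row in product([0, 1], repeat=3) if good_flame(*row)]

def term_string(row):
    # Render a row as a product term, priming inputs that are 0 in the row.
    return "".join(n + ("" if bit else "'") for n, bit in zip("ABC", row))

def sop(A, B, C):
    # The SOP is true exactly when some product term matches the inputs.
    return any(all(v == bit for v, bit in zip((A, B, C), row)) for row in terms)

print(" + ".join(term_string(r) for r in terms))   # A'BC + AB'C + ABC' + ABC

# The generated SOP reproduces the original truth table on every row.
for row in product([0, 1], repeat=3):
    assert sop(*row) == good_flame(*row)
```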

Now that we have a Boolean Sum-Of-Products expression for the truth table's function, we can easily design a logic gate or relay logic circuit based on that expression:

Unfortunately, both of these circuits are quite complex, and could benefit from simplification. Using Boolean algebra techniques, the expression may be significantly simplified:
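The simplification reduces the four-term SOP to the standard two-out-of-three majority form, AB + BC + AC. As a quick check (a sketch of mine, assuming that simplified form), the following verifies the equivalence exhaustively:

```python
from itertools import product

# Four-term SOP taken directly from the truth table.
def full_sop(A, B, C):
    return (((not A) and B and C) or (A and (not B) and C)
            or (A and B and (not C)) or (A and B and C))

# Simplified two-out-of-three (majority) form: AB + BC + AC.
def simplified(A, B, C):
    return (A and B) or (B and C) or (A and C)

for A, B, C in product([False, True], repeat=3):
    assert full_sop(A, B, C) == simplified(A, B, C)
print("A'BC + AB'C + ABC' + ABC  ==  AB + BC + AC")
```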

As a result of the simplification, we can now build much simpler logic circuits performing the same function, in either gate or relay form:

Either one of these circuits will adequately perform the task of operating the incinerator waste valve based on a flame verification from two out of the three flame sensors. At minimum, this is what we need to have a safe incinerator system.

We can, however, extend the functionality of the system by adding to it logic circuitry designed to detect if any one of the sensors does not agree with the other two. If all three sensors are operating properly, they should detect flame with equal accuracy. Thus, they should either all register "low" (000: no flame) or all register "high" (111: good flame). Any other output combination (001, 010, 011, 100, 101, or 110) constitutes a disagreement between sensors, and may therefore serve as an indicator of a potential sensor failure. If we added circuitry to detect any one of the six "sensor disagreement" conditions, we could use the output of that circuitry to activate an alarm. Whoever is monitoring the incinerator would then exercise judgment in either continuing to operate with a possible failed sensor (inputs: 011, 101, or 110), or shutting the incinerator down to be absolutely safe. Also, if the incinerator is shut down (no flame), and one or more of the sensors still indicates flame (001, 010, 011, 100, 101, or 110) while the other(s) indicate(s) no flame, it will be known that a definite sensor problem exists.

The first step in designing this "sensor disagreement" detection circuit is to write a truth table describing its behavior. Since we already have a truth table describing the output of the "good flame" logic circuit, we can simply add another output column to the table to represent the second circuit, and make a table representing the entire logic system:

While it is possible to generate a Sum-Of-Products expression for this new truth table column, it would require six terms, of three variables each! Such a Boolean expression would require many steps to simplify, with a large potential for making algebraic errors:

An alternative to generating a Sum-Of-Products expression to account for all the "high" (1) output conditions in the truth table is to generate a Product-Of-Sums, or POS, expression, to account for all the "low" (0) output conditions instead. Since there are far fewer instances of a "low" output in the last truth table column, the resulting Product-Of-Sums expression should contain fewer terms. As its name suggests, a Product-Of-Sums expression is a set of added terms (sums) which are multiplied (product) together. An example of a POS expression would be (A + B)(C + D), the product of the sums "A + B" and "C + D". To begin, we identify which rows in the last truth table column have "low" (0) outputs, and write a Boolean sum term that would equal 0 for that row's input conditions. For instance, in the first row of the truth table, where A=0, B=0, and C=0, the sum term would be (A + B + C), since that term would have a value of 0 if and only if A=0, B=0, and C=0:

Only one other row in the last truth table column has a "low" (0) output, so all we need is one more sum term to complete our Product-Of-Sums expression. This last sum term represents a 0 output for an input condition of A=1, B=1, and C=1. Therefore, the term must be written as (A' + B' + C'), because only the sum of the complemented input variables would equal 0 for that condition only:

The completed Product-Of-Sums expression, of course, is the multiplicative combination of these two sum terms:
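Since the disagreement output is 0 only for the all-agree rows 000 and 111, the two-term POS expression can be verified exhaustively. This Python sketch (mine, not from the text) performs that check:

```python
from itertools import product

# "Sensor disagreement" output: 1 unless all three sensors agree.
def disagreement(A, B, C):
    return not (A == B == C)

# Two-term POS expression built from the truth table's two "low" rows.
def pos_expr(A, B, C):
    return (A or B or C) and ((not A) or (not B) or (not C))

for A, B, C in product([False, True], repeat=3):
    assert pos_expr(A, B, C) == disagreement(A, B, C)
print("(A + B + C)(A' + B' + C') matches the disagreement column")
```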

Whereas a Sum-Of-Products expression could be implemented in the form of a set of AND gates with their outputs connecting to a single OR gate, a Product-Of-Sums expression can be implemented as a set of OR gates feeding into a single AND gate:

Correspondingly, whereas a Sum-Of-Products expression could be implemented as a parallel collection of series-connected relay contacts, a Product-Of-Sums expression can be implemented as a series collection of parallel-connected relay contacts:

The previous two circuits represent different versions of the "sensor disagreement" logic circuit only, not the "good flame" detection circuit(s). The entire logic system would be the combination of both "good flame" and "sensor disagreement" circuits, shown on the same diagram. Implemented in a Programmable Logic Controller (PLC), the entire logic system might resemble something like this:

As you can see, both the Sum-Of-Products and Product-Of-Sums standard Boolean forms are powerful tools when applied to truth tables. They allow us to derive a Boolean expression -- and ultimately, an actual logic circuit -- from nothing but a truth table, which is a written specification for what we want a logic circuit to do. To be able to go from a written specification to an actual circuit using simple, deterministic procedures means that it is possible to automate the design process for a digital circuit. In other words, a computer could be programmed to design a custom logic circuit from a truth table specification! The steps to take from a truth table to the final circuit are so unambiguous and direct that it requires little, if any, creativity or other original thought to execute them.
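As a minimal sketch of this automation idea (the function name truth_table_to_sop is my own illustration, not from the text), the following Python turns any truth table, given as a function of its inputs, into its SOP expression:

```python
from itertools import product

def truth_table_to_sop(fn, names):
    # For each row where the function outputs 1 (True), emit a product
    # term, priming the variables that are 0 in that row.
    terms = []
    for row in product([0, 1], repeat=len(names)):
        if fn(*row):
            terms.append("".join(n + ("" if b else "'")
                                 for n, b in zip(names, row)))
    return " + ".join(terms) if terms else "0"

# Example: the two-out-of-three "good flame" function from this chapter.
print(truth_table_to_sop(lambda A, B, C: A + B + C >= 2, "ABC"))
# -> A'BC + AB'C + ABC' + ABC
```

The output is exactly the four-term expression derived by hand above; simplification (to AB + BC + AC here) would be a separate step.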

REVIEW:
- Sum-Of-Products, or SOP, Boolean expressions may be generated from truth tables quite easily by determining which rows of the table have an output of 1, writing one product term for each of those rows, and finally summing all the product terms. This creates a Boolean expression representing the truth table as a whole.
- Sum-Of-Products expressions lend themselves well to implementation as a set of AND gates (products) feeding into a single OR gate (sum).
- Product-Of-Sums, or POS, Boolean expressions may also be generated from truth tables quite easily by determining which rows of the table have an output of 0, writing one sum term for each of those rows, and finally multiplying all the sum terms. This creates a Boolean expression representing the truth table as a whole.
- Product-Of-Sums expressions lend themselves well to implementation as a set of OR gates (sums) feeding into a single AND gate (product).
