
Foreword: Fundamentals of Switching Theory



My first experience with switching circuits was in 1952, as a student in Electrical Engineering at the University of Nebraska-Lincoln; a friend needed help in getting a remote-controlled vehicle to function properly for an E-Week project. My first formal view of switching theory came through internal seminars held at Bell Telephone Laboratories in 1953. Although brief, my training at Bell Labs was also of some help in developing a special weapons trainer while in the Air Force, a Tic-Tac-Toe machine for another E-Week project in 1956, and in formal switching theory courses at Stanford. One of the duties associated with my assistantship at Stanford was to develop logic diagrams for a computer donated to Stanford by IBM.

I began teaching switching theory in the early 1960s, at first from Caldwell's excellent text designed around relays, and later from quite a large number of texts as they became available. It was frustrating to find that new books frequently failed to cover some of the material as well as it had been covered in earlier texts. Also, no single text seemed to cover all of the more important aspects of switching theory. At one point, four texts were being used for a single course. Finally a single text was used, but heavily assisted by supplemental notes. There was one text that was excellent for combinational circuit theory but not very good for sequential circuits. Another was quite good for sequential circuits, but its approach to combinational circuits was unsatisfactory. I decided to use the book with the excellent combinational circuit theory, since I had supplemental notes for sequential circuits. Also, the sequential theory in the other book would still have required substantial supplemental notes, and I think it is psychologically advantageous to start the course with good textual material.
The decision to put the notes into the form of a text was made when that particular book went out of print and none of the available texts served as a satisfactory substitute. As it turned out, supplemental notes were far easier to write than a text. For much of the material, others had written voluminously and well, and there was little motivation to rewrite it. However, there were several items which had never been covered well, and at times that seemed to justify the effort.

This text is intended to provide the basic tools for switching theory and logic circuit design. Although it is written primarily for use in an undergraduate course in Electrical Engineering, the course has been taken quite satisfactorily by students in Computer Science, Mathematics and other engineering disciplines. The emphasis in this book is on the theory (Boolean Algebra and the algorithms that are needed to manipulate it) that underlies the design of combinational and sequential circuits. However, it is very important that the student have at least six experiments in a supervised laboratory environment to become familiar with discrete switching components and to develop a feeling for signals that vary with time. The experiments should be performed with discrete-component "chips" and, although sophisticated hardware is not required, it is desirable to use the best equipment available. At a minimum, the experiments should include the following.

1. (While covering Chapter 3) Examination of two-input and gates with respect to the Table of Combinations; development of signals that vary with time; and gates and or gates with the time-signal Table of Combinations (see Figure 3.3 in the text).
2. (While covering Chapter 3) Verification of the postulates of Boolean Algebra: a. with and gates and or gates; b. with analog signal switching devices.
3. (After covering Chapter 5) Encoding and decoding: a. an 8-to-3 encoder; b. a 3-to-8 decoder; c. a 4-bit to 7-segment decoder.
4. (After Experiment 3) Multiplexers: a. analog; b. digital; c. as a combinational circuit.
5. Build and test a single-rail R-S flip-flop. (This is in conjunction with Problem 7.1.2.)
6. Build and test the circuit for Problem 7.5.

The experiments should be scheduled to reinforce the lectures. This creates some problems with Experiment No. 5 above. To overcome this, it is desirable to work Problem 7.1.2 in lecture, from word problem through state assignment to final design. The sequence of the material in the book is broken by this lecture. However, Problem 7.1.2 is a simple one, and there is no need to dwell on the state minimization or state assignment processes. Viewed as an opportunity, the lecture can act as a motivational lecture for the material that follows.

The emphasis everywhere is on fundamental concepts. If the student understands the basic processes thoroughly, he/she will be able to grasp new material more readily and to put it in proper perspective. When I first began teaching switching theory, the concepts from set theory were not generally utilized. The introduction of these concepts made a big difference in "teaching efficiency." Their inclusion in this text for students at this level should make a difference not only in switching theory, but in other courses as well. In some curricula, this material may be presented in earlier courses, in which case it can be used as a review. However, it is so important in helping a student develop the ability to work with abstractions and to extend his or her thought processes that it has been included here as an integral part of the text.
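Experiments 2 and 3 above can also be previewed in software before the student reaches the bench. The following Python sketch is my own illustration, not an example from the text (the function name is hypothetical): it verifies one Boolean Algebra postulate by enumerating the full Table of Combinations, and models the 3-to-8 decoder of Experiment 3b.

```python
from itertools import product

# Experiment 2: verify a postulate of Boolean Algebra -- here, distributivity
# of "and" over "or" -- by checking every row of the Table of Combinations.
for x, y, z in product((0, 1), repeat=3):
    assert (x and (y or z)) == ((x and y) or (x and z))

# Experiment 3b: a 3-to-8 decoder -- exactly one of the eight output lines
# is 1, selected by the three input bits (most significant bit first).
def decode_3to8(a, b, c):
    index = 4 * a + 2 * b + c
    return [1 if i == index else 0 for i in range(8)]

print(decode_3to8(1, 0, 1))  # line 5 high: [0, 0, 0, 0, 0, 1, 0, 0]
```

In the lab, the same checks are made by wiring gates and observing the Table of Combinations directly; the software version simply lets the student predict what the instruments should show.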
I have found the process of enumeration to be invaluable in solving large and particularly difficult problems, and so I have placed emphasis on that subject as well.

In Chapter 4, considerable effort is made to help the student see the relationships that exist between the two principal ways of formulating functions. This is a long chapter, and in many ways it reads more like a dictionary than a textbook. However, there are a large number of new definitions, and this seemed a reasonable way of presenting them. In all chapters, problems are introduced within the textual material, and the student should seriously consider working all of them before moving on. Because of the large number of new concepts and definitions, working the problems is probably more important in Chapter 4 than in any other chapter.

Minimization techniques have experienced a wide range of popularity as technology has changed. I feel that the increased understanding of switching circuits and the underlying algebraic manipulations, together with the confidence this understanding builds in the student, far outweighs any concern over how much the techniques might be used in the field. It



has been my experience that almost all of the difficulties students have lie in their basic understanding of the processes involved. This material is a significant help to them in this respect.

Chapter 6 introduces nand gates, nor gates, and xor gates, and discusses very briefly some of the general chips available for the design of combinational circuits. The emphasis here again is on fundamental operation, but with some additional conceptual aids to help in transforming one type of circuit into another.

The organization of the material on sequential circuits is substantially different from that of most texts. Most authors apparently feel obligated to discuss flip-flops and the analysis of sequential circuits before teaching design. The frustrations of teaching analysis first are many, for both teacher and student. I decided to teach design first. The results were amazing. Students moved through the material without the frustration noted in previous years, and everything went much more smoothly in general. One reason for this, perhaps the major one, is that the material can be organized so that one basic concept is taught at a time, ensuring that the student has a good grasp of the fundamentals before proceeding to the next concept.

One word of caution: the teacher must really make certain that students understand the relationship of the transition table to the operation of a sequential circuit. Over the years, I continue to find that the difficulties most students have at the state assignment and final design stages result from not fully understanding this relationship. Some very trivial problems have been added to Chapter 7 specifically to attack this problem.

The minimization of transition tables is handled by both the Huffman-Mealy and the compatible pairs methods. The introduction of a "Conflict Resolution Operator" into the standard set operations appears to be of considerable conceptual advantage in teaching the material.
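The transition-table relationship stressed in the word of caution above can be made concrete in software. The following Python sketch is my own illustration, not an example from the text; the states, inputs, and behavior are hypothetical, but it shows how a transition table completely determines a circuit's response to an input sequence.

```python
# A transition table maps (present state, input) to (next state, output).
# Hypothetical example: a two-state circuit whose output becomes, and stays,
# 1 once a 1 has been seen on the input (a trivial "set" behavior).
TRANSITIONS = {
    ("A", 0): ("A", 0),
    ("A", 1): ("B", 1),
    ("B", 0): ("B", 1),
    ("B", 1): ("B", 1),
}

def run(inputs, state="A"):
    """Step through an input sequence, returning the output sequence."""
    outputs = []
    for x in inputs:
        state, out = TRANSITIONS[(state, x)]  # the table lookup IS the circuit
        outputs.append(out)
    return outputs

print(run([0, 1, 0, 0]))  # [0, 1, 1, 1]: once set, the output stays 1
```

Tracing a few rows of a table this way, before any state assignment is attempted, is exactly the kind of exercise the trivial problems in Chapter 7 are meant to provide.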
The Conflict Resolution Operator also provides a clean, straightforward technique for setting up and splitting output class sets in the Huffman-Mealy procedure, regardless of don't-cares in the output section.

State assignment has been handled in a particularly simple way, consistent with the attempt to emphasize fundamental concepts that was the principal motivation for the development of this text. Finally, the design of sequential circuits is presented in a simple form that is usable for all types of design, including designs with clock-controlled flip-flops. Although there is occasional mention of more sophisticated techniques, the more complex schemes have been avoided so that the student can concentrate on fundamentals. The use of MSI and VLSI circuits in system design is thus postponed to future courses. Also, technological constraints have not been discussed, or even alluded to. This is partially because such material is better presented after students have had their basic electronics courses, and partially because I wanted to allow students the opportunity to concentrate on the theory in an uncomplicated way. The student should be made aware of the interface problems at some point in the course, although a formal approach is not necessary at this level.

There are many things that could have been included in this book. Foremost, in my opinion, are problems that demonstrate the relationship of switching theory to good computer program design. On several occasions I have had difficulty with "nasty" programs, and ended up resolving the problems by using sequential circuit design techniques. There are also close ties to philosophy (logic) and probability theory that have been brought out in the appendices to some degree. But there is barely enough time to cover the


material presented in one semester, and there is a point of diminishing returns in tying things together, too, especially if it detracts from the students' concentration on the material at hand. The appendices have been included simply because they contain ties to the theory which are not likely to be found in other texts.

Some teachers will feel that number systems should have been included in the text. Outside of the use of binary and decimal numbers to work with canonical product and sum terms, switching theory has no need for the number systems. Of course, switching circuit design has been motivated by the need for it in computer design. I feel, however, that its place is in those courses dealing with computer design, introducing the material at a time when it will actually be used. Appendix 1 has been added, however, for those who wish to incorporate that material into the course.
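The one use of number systems mentioned above, decimal names for canonical terms, is small enough to show in a few lines. This Python sketch is my own illustration (the function name and the prime/dot notation are my assumptions, not drawn from the text): the decimal index of a canonical product term is simply its variable pattern read as a binary number.

```python
def canonical_product(index, variables):
    """Spell out the canonical product term with the given decimal index.

    The index, written in binary, tells which variables appear complemented:
    index 5 over (A, B, C) is binary 101, i.e. the term A·B'·C.
    """
    bits = format(index, "0{}b".format(len(variables)))
    return "·".join(v if b == "1" else v + "'" for v, b in zip(variables, bits))

print(canonical_product(5, ("A", "B", "C")))  # A·B'·C
print(canonical_product(0, ("A", "B", "C")))  # A'·B'·C'
```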


References: Fundamentals of Switching Theory

References
The following is a partial list of books from which this author has taught. Included after each book is the first paragraph of the author's preface. The books are listed in chronological sequence. As time progresses, there is a tremendous increase in sophistication, new needs, new languages and new directions, but the fundamentals are still there and presented well.

-William Keister, Alistair E. Ritchie, Seth H. Washburn; Members of the Technical Staff, Bell Telephone Laboratories, The Design of Switching Circuits, 1951, D. Van Nostrand Company, Inc., Toronto, New York, London.
The present is an excellent time for this book to appear. In the past, general interest in switching art and knowledge of its unique techniques were limited to a few quarters where complex control mechanisms such as telephone switching systems were developed and used. Now, however, the situation is changing rapidly and radically. College professors, research engineers, scientists and mathematicians are now aware of and keenly interested in the subject.
Bibliography: A list of the Bell Telephone Laboratory Series (there are some classic books here by reputed authors).

-Samuel H. Caldwell, Professor of Electrical Engineering, Massachusetts Institute of Technology, Switching Circuits and Logic Design, 1958, John Wiley & Sons, Inc.
Many of the world's switching circuits were designed and built without the aid of any appreciable body of switching-circuit theory. The men who designed them developed and organized their experience into creative skills of high order without benefit of theory. Why, then, is it important to have a theoretical basis for designing switching circuits? What are these circuits and what do they do?
Bibliography: None, but many references are embedded, including Boole, Gray, Hamming, Huffman, Karnaugh, McCluskey, Petrick, Quine, Ritchie, Shannon, Unger, Venn, and many others.
-Montgomery Phister, Jr., Director of Engineering, Thompson-Ramo-Wooldridge Products, Inc., Los Angeles, California, Logical Design of Digital Computers, 1958, John Wiley & Sons, Inc. Of the many important developments in electrical engineering during the past decade, perhaps none has caught hold of the public and scientific imagination more than the electronic digital computer. The application of these machines to military and scientific computations, to the accounting problems of business, and more recently to the control of industrial tools and processes has shown how effectively engineers have applied their imaginations.


Bibliography: Excellent; by category, at the end of each chapter.

-Watts S. Humphrey, Jr., Manager, Computer Advanced Development Data Processing Laboratory, Sylvania Electric Products, Inc.; Instructor, Northeastern University.
The need for a text which applies switching-circuit techniques to computer design problems became apparent to the author while teaching a graduate course on the subject at Northeastern University. This book is intended both as a graduate engineering text and as an aid to the practicing design engineer.
Bibliography: Short, but quite good (28 entries).

-Donald D. Givone, Faculty of Engineering and Applied Sciences, State University of New York at Buffalo, Introduction to Switching Circuit Theory, 1970, McGraw-Hill, Inc.
Since the advent of the digital computer, there have appeared on the local scene the 0-1 boys and their logic systems. Switching circuit theory is the underlying principle behind these systems. To some, switching circuit theory is a mathematical discipline for the formalization of the characteristics of logic systems, while to others it is a tool for the effective design of future systems. Thus, the mathematician, the physical scientist and the engineer have taken an interest in this area.
Bibliography: Excellent; References (88) and Related Readings (133).
Author's Note: I liked Givone's approach to Combinational Circuits better than the rest. Also, his approach to iterative circuits was excellent and was the first I had seen. I called him to see if we might collaborate on sequential circuits. However, he had no interest in authoring any more textbooks.

-John F. Wakerly, Cisco Systems, Inc., Stanford University, Digital Design: Principles and Practices, 1990, Prentice Hall.
Moore's Law, which observes that semiconductor technology advances exponentially, has been valid for over three decades. Experts predict it will continue to hold for at least one more. When integrated circuits were introduced, logic packages had a dozen or so transistors. Today, with exponential increases in circuit density, microprocessor chips have passed the 10-million-transistor mark. In less than another decade they will reach 100 million transistors per chip.
Bibliography: Excellent; very selective, at the end of each chapter. This is perhaps the way of the future.


Table of Contents: Fundamentals of Switching Theory

Foreword: Fundamentals of Switching Theory .......... i
References .......... v
1. A Bit of the History of Computing .......... 1
2. Basic Concepts for Use with Switching Theory .......... 6
2.1. Introduction .......... 6
2.2. Basic Concepts from Set Theory .......... 6
2.2.1. Set Operations .......... 7
2.3. Set Equality .......... 8
2.3.1. Sets And Binary Relations .......... 9
2.4. Functions .......... 10
2.5. Properties of Operators .......... 12
2.6. Mathematical Systems .......... 14
2.7. Enumeration .......... 14
2.8. Size of Ordered and Unordered Sets .......... 15
2.8.1. Sets of Ordered Sets .......... 15
2.9. Permutations .......... 15
2.10. Unordered Subsets .......... 16
2.11. Additional Problems For Chapter 2 .......... 18
3. Boolean Algebra: A Basis for Switching Theory .......... 20
3.1. Introduction and Orientation .......... 20
3.2. Logic Diagrams .......... 20
3.3. Binary Valued Signals .......... 20
3.4. Relays and Analog Signal Switches .......... 22
3.5. Concepts of Complementation .......... 23
3.6. Boolean Algebra .......... 24
3.7. Some Boolean Algebra Theorems .......... 27
3.8. Additional Problems for Chapter 3 .......... 30
4. Switching Theory for Combinational Circuits .......... 33
4.1. Introduction .......... 33
4.2. Definitions of Switching Functions and Expressions .......... 33
4.3. Rules of Assembly .......... 34
4.4. Establishment of a Hierarchy for Operators .......... 35
4.5. Forms for Expressing Functions .......... 36
4.6. DeMorgan's Law Revisited .......... 38
4.7. Converting To Canonical Form .......... 39
4.8. Decimal Notation .......... 40
4.9. Decimal Notation - Canonical Product Terms .......... 41
4.10. Decimal Notation - Canonical Sum Terms .......... 42
4.11. Karnaugh Maps .......... 45
4.12. N-Cubes .......... 49
4.13. Tie Sets and Cut Sets .......... 50
4.14. Some Aspects of Circuit Design .......... 52
4.15. Specifying Incomplete Functions .......... 53
4.16. Encoders .......... 55


4.17. Decoders .......... 55
4.18. Additional Problems for Chapter 4 .......... 56
5. Minimization of Combinational Circuits .......... 62
5.1. Basic Minimization Processes .......... 63
5.2. Finding the Primes from Karnaugh Maps .......... 64
5.2.1. Column Removal .......... 67
5.2.2. Row Removal .......... 68
5.3. Essential Primes on Karnaugh Maps .......... 69
5.4. Most Economical Coverage on Karnaugh Maps .......... 69
5.5. Minimization of Incompletely Specified Functions .......... 70
5.6. Minimization of Conjunctive Forms .......... 70
5.7. Minimization Using Tabular Techniques .......... 70
5.8. The Quine-McCluskey Procedure for Disjunctive Forms .......... 72
5.9. Minimization of Conjunctive Normal Forms .......... 74
5.10. Multiple Function Minimization .......... 74
5.11. The Tagged Quine-McCluskey Procedure .......... 74
5.12. Hazards With Combinational Circuits .......... 83
5.13. Additional Problems for Chapter 5 .......... 86
6. Nand, Nor, Xor, etc., Combinational Circuits .......... 99
6.1. Nand Operations .......... 99
6.2. Non-Negotiable Parentheses .......... 101
6.3. An Alternate Algorithm .......... 102
6.4. Conversion From Nand To +, ·, Not .......... 103
6.5. Nor Operators .......... 104
6.5.1. The Nor Conversion Theorem .......... 105
6.5.2. Conversion Of Nor To +, ·, Not .......... 106
6.5.3. Conversion Between Nand and Nor .......... 106
6.5.4. An Alternate Nor Conversion Algorithm .......... 107
6.6. Xor Operations .......... 107
6.7. The Multiplexer as a Combinational Circuit Device .......... 108
6.8. ROMS, PROMS, PALS, PLAS .......... 109
6.8.1. ROMS .......... 109
6.8.2. PROMS .......... 109
6.8.3. EPROMS .......... 110
6.8.4. PAL .......... 110
6.8.5. PLA .......... 110
6.9. Additional Problems for Chapter 6 .......... 110
7. Introduction to Sequential Circuits .......... 113
7.1. Introduction .......... 113
7.2. States, Graphs and Transition Tables .......... 113
7.2.1. Concept of a State in Sequential Circuits .......... 113
7.2.2. Concept of a State Graph .......... 114
7.2.3. Concept of a Transition Table .......... 115
7.3. Steps Involved in the Design of Sequential Circuits .......... 116
7.3.1. Word Problem to State Graph and Transition Table .......... 116
7.3.2. Transition Table to Reduced Transition Table .......... 116



7.3.3. Selection of Memory Devices and State Assignment .......... 117
7.3.4. Design of Combinational Circuits to Set the Memory Elements .......... 117
7.3.5. Final Analysis and Hazard Elimination .......... 117
7.4. Types of Sequential Circuits and Their State Graphs .......... 117
7.4.1. Fundamental Mode .......... 119
7.4.1.1. An Example of Fundamental Mode - Sample Problem No. 1 .......... 119
7.4.2. Pulse Mode Circuits .......... 122
7.4.3. Synchronized Circuits with Level Inputs .......... 123
7.4.4. Pulse Circuits Without Levels Input .......... 124
7.4.4.1. Circuits with a Single Pulse Input (no levels), or with Multiple Pulse Inputs (possibly with levels) .......... 124
7.4.4.2. An Example of Pulse Mode - Sample Problem No. 2 .......... 125
7.5. Types of Sequential Circuits and Their Transition Tables .......... 126
7.5.1. Fundamental Mode .......... 127
7.5.1.1. Fundamental Mode Transition Tables .......... 127
7.5.1.1.1. Outputs as a Function of Internal State Only .......... 127
7.5.1.1.2. Outputs as a Function of Total State .......... 127
7.5.1.2. An Example of Fundamental Mode - Sample Problem No. 1 .......... 128
7.5.2. Pulse Mode .......... 129
7.6. Transition Tables Viewed Dynamically .......... 131
7.6.1. Fundamental Mode .......... 131
7.6.2. Pulse Mode .......... 132
7.7. Additional Problems for Chapter 7 .......... 133
8. Minimization of Transition Tables .......... 137
8.1. Introduction .......... 137
8.2. The Conflict Resolution Operator and Algorithm .......... 138
8.3. Set Division By Exclusion .......... 138
8.3.1. Conflict Sets and Exclusion Sets .......... 139
8.3.2. Conflict Resolution .......... 139
8.3.3. Conflict Resolution Algorithm .......... 140
8.4. Huffman-Mealy State Minimization .......... 141
8.4.1. Huffman-Mealy Method and Sample Problem .......... 143
8.5. The Compatible Pairs Method .......... 147
8.5.1. Establishing Conflicting Output States .......... 147
8.5.2. Determining Pair-wise Closure Constraints .......... 148
8.5.3. Determination of Incompatibility Based on Closure Constraints .......... 148
8.5.4. Determination Of Maximal Compatible Sets .......... 149
8.6. The Grasselli-Luccio Method .......... 150
8.7. The Search for Prime C-Sets .......... 151
8.7.1. Selection of C-Sets .......... 155
8.8. Additional Problems for Chapter 8 .......... 158
9. State Assignment For Sequential Circuits .......... 167
9.1. Rules for State Assignment .......... 167
9.2. A Simple Method For Simple Circuits .......... 168
9.2.1. Effects of "Must" Entries (Fundamental Mode Only) .......... 171
9.3. State Assignment Strategies for Pulse Mode .......... 173


Table of Contents: Fundamentals of Switching Theory

9.3.1. Exceptions to the Rules ..................................................................................173 9.3.2. Another Example ............................................................................................173 9.4. Concluding Remarks.............................................................................................. 176 10. Sequential Circuit Design...............................................................................................187 10.1. Memory Elements .................................................................................................. 187 10.2. Fundamental Mode................................................................................................. 187 10.2.1. Pulse Mode with Levels Input and One Synchronizing Pulse ........................189 10.2.2. Multiple Pulse Circuits ..................................................................................190 10.2.3. Memory Element Design Characteristics ......................................................191 10.2.4. The R-S Flip-Flop ..........................................................................................192 10.2.5. J-K Flip-Flops ................................................................................................192 10.2.6. D Elements and D Flip-Flops ........................................................................193 10.2.7. T Flip-Flops ...................................................................................................193 10.2.8. General Comments on Flip-Flops ..................................................................193 10.2.9. Design Procedures .........................................................................................193 10.2.10. Fundamental Mode ........................................................................................194 10.2.11. Pulse Mode with Levels Input and One Synchronizing Pulse ........................197 10.2.12. 
Multi Pulse Design .........................................................................................199 10.2.13. Analysis of Sequential Circuits ......................................................................203 10.2.14. Essential Hazards in Fundamental Mode ......................................................203 10.3. Additional Sequential Circuit Design Problems .................................................... 204 10.4. Concluding Remarks.............................................................................................. 205 10.5. Additional Problems for Chapter 10 ...................................................................... 206 11. Design of Iterative Circuits ............................................................................................215 11.1. Introduction ............................................................................................................ 215 11.2. Parity Checking Circuits ........................................................................................ 216 11.2.1. Parity Checking - One Input/Cell ..................................................................216 11.2.2. Parity Checking - Two Inputs/Cell .................................................................217 11.2.3. N Out Of M Circuits .......................................................................................218 11.2.4. N Adjacent Inputs High .................................................................................218 A. Appendices .....................................................................................................................220 A1. Appendix I. Number Systems.......................................................................................221 A2. Appendix II. Relationships between Mathematical Logic and Switching Theory.........227 A3. Appendix III. Relationships between Probability and Switching Theory................229

Table of Figures: Fundamentals of Switching Theory

Figure 2.1. Venn Diagrams Showing Complement, Union and Intersection .......................... 8 Figure 2.2. Venn Diagrams for Three Variables...................................................................... 9 Figure 2.3. Pictorial Representation of Set Terms................................................................. 12 Figure 3.1. The Basic Or Circuit: z = x + y .......................................................................... 21 Figure 3.2. The Basic And Circuit: z = x y ......................................................................... 21 Figure 3.3. Electrical Signals and Their Or and And Equivalence ........................................ 22 Figure 3.4. Relay Circuits with "Attached Contacts" ........................................................... 22 Figure 3.5. Or and And Relay Schematics............................................................................. 23 Figure 3.6. Three Representations of x, x̄, 0 and 1.............................................................. 23 Figure 3.7. Distributive Law P2(a) with Venn Diagrams...................................................... 25 Figure 3.8. Distributive Law P2(a) with Relay Circuits........................................................ 25 Figure 3.9. Proof of P2(a) by Table of Combinations ........................................................... 26 Figure 3.10. Distributive Law P2(a) with Digital Signals ..................................................... 26 Figure 3.11. Consequences of Delays in Signal Inversion .................................................... 27 Figure 3.12. Venn Diagrams for (a b) and (a + b) ............................................................... 28 Figure 4.1. Algebraic Manipulation with F = Function (2 or more variables) ...................... 40 Figure 4.2. Decimal Manipulation: F = Function (2 or more variables).............................. 44 Figure 4.3.
Karnaugh Maps for up to Six Variables with Variable Legends......................... 46 Figure 4.4. Karnaugh Maps for up to Six Variables with Cells Numbered........................... 47 Figure 4.5. 3-Cube ................................................................................................................. 49 Figure 4.6. Methods of Coding {a,b,c,d}............................................................................... 49 Figure 4.7. Analog Switching Circuits .................................................................................. 51 Figure 4.8. Figure 4.7a Drawn as a Graph with its Topological Inverse............................... 52 Figure 5.1. Focus on 1-Cells .................................................................................................. 64 Figure 5.2. Some Patterns of Primes of Four Variables ........................................................ 65 Figure 5.3. Multiple Output Example - Criterion 3 ............................................................... 78 Figure 5.4. Multiple Output Example Realization.............................................................. 79 Figure 5.5. The Hazard: Effect of Delays in Digital Signals ................................................ 83 Figure 5.7. f(x,y,z) = xz + y z ................................................................................................ 84 Figure 6.1. Representations of the Nand Operation............................................................... 99 Figure 6.2. Nand Circuits..................................................................................................... 100 Figure 6.3. Representations of the Nor Operation ............................................................... 104 Figure 6.4. Nor Circuits ....................................................................................................... 105 Figure 6.5. 
Representation of the xor Operation.................................................................. 107 Figure 6.6. Table of Combinations for Odd Indexes; Map of Odd Indexes ........................ 108 Figure 6.7. Electro-Mechanical and Electrical Switching Multiplexers.............................. 109 Figure 7.1. Partial State Graph............................................................................................. 114 Figure 7.2. Partial Transition Table ..................................................................................... 115 Figure 7.3. Sequential Movement to New Stable States...................................................... 116 Figure 7.4. State Graph for Fundamental Mode .................................................................. 119 Figure 7.5. First State Development .................................................................................... 120 Figure 7.6. Fully Developed State Graph: Sample Problem #1.......................................... 121 Figure 7.7. State Graph for Synchronized Circuit With Levels Input ................................. 124 Figure 7.8. Possible State Graph for Pulse-Driven Circuits ................................................ 125



Figure 7.9. State Graph for Sample Problem No. 2 ............................................................. 125 Figure 7.10. Transition Table for Fundamental Mode (Outputs as a Function of Internal State Only) ................................................. 127 Figure 7.11. Fundamental Mode With Outputs a Function of Total State .......................... 128 Figure 7.12. Transition Table for Sample Problem No. 1, Case a....................................... 128 Figure 7.13. Transition Table for Sample Problem No. 1, Case b....................................... 129 Figure 7.14. Transition Table Corresponding to Figure 7.7 ................................................ 130 Figure 7.15. Transition Table Corresponding to Figure 7.8 ................................................ 131 Figure 7.16. Transition Table for Sample Problem No. 2 ................................................... 131 Figure 8.1. Transition Table for Sample Problem No. 1 in Chapter 7................................. 143 Figure 8.2. State Coverage Table for Sample Problem No. 1.............................................. 145 Figure 8.3. Final Transition Table for Sample Problem No. 1 ............................................ 145 Figure 8.4. Final State Graph for Sample Problem No. 1.................................................... 145 Figure 8.5. Transition Tables for Problems 8.2 and 8.3 ...................................................... 146 Figure 8.6. Transition Table for Sample Problem No. 1 in Chapter 7................................. 147 Figure 8.7. Output Conflicts and Closure Constraints ......................................................... 148 Figure 8.8. Developing M-Sets from Completed Closure Constraints................................ 149 Figure 8.9. Sample Problem for Grasselli-Luccio Analysis ................................................ 152 Figure 8.10. Grasselli-Luccio Search for Prime c-sets ........................................................
154 Figure 8.11. Coverage Table for Prime C-Sets.................................................................... 155 Figure 8.12. Minimum C-Set Determination....................................................................... 157 Figure 8.13. Final Transition Table for Grasselli-Luccio Example..................................... 158 Figure 8.14. Expansion of Stripped-Down Tables............................................................... 159 Figure 8.15. Stripped-Down Transition Tables for Problems 8.7 and 8.8........................... 159 Figure 8.16. Stripped-Down Transition Tables for Problems 8.9 and 8.10......................... 160 Figure 8.18. Stripped-Down Transition Tables for Problems 8.15...................................... 162 Figure 8.19. Stripped-Down Transition Tables for Problem 8.16 ....................................... 163 Figure 9.1. Transition Table for State Assignment.............................................................. 168 Figure 9.2. State Assignment Voting on Options ................................................................ 170 Figure 9.3. Transition Table with States Assigned .............................................................. 173 Figure 9.4. Transition Table for State Assignment, Example No. 2.................................... 174 Figure 9.5. State Assignment Voting on Adjacencies ......................................................... 175 Figure 9.6. All Possible Combinations on Assignments for Input State 00 ........................ 176 Figure 9.7. Transition Table with State Assignments, Example No. 2................................ 176 Figure 10.1. Fundamental Mode Memory Elements ........................................................... 189 Figure 10.2. Pulse Mode Memory Elements with Level Inputs. ......................................... 190 Figure 10.3. D and J-K Flip-Flops With Multiple Pulse Inputs........................................... 191 Figure 10.4. 
Recommended Layout, Design Truth Tables.................................................. 194 Figure 10.5. State Assigned Transition Table From Figure 8.3........................................... 195 Figure 10.6. Design Truth Table, Fundamental Mode......................................................... 195 Figure 10.7. Hazard-Free Design, Fundamental Mode........................................................ 196 Figure 10.8. State Assigned Transition Table From Figure 7.16......................................... 197 Figure 10.9. Pulse Mode Design Truth Table - One Synchronizing Pulse.......................... 197 Figure 10.10. J-K Circuit Design With Synchronizing Pulse.............................................. 198 Figure 10.11. Multi-pulse Transition Table......................................................................... 200 Figure 10.12. Initial Design Truth Table Including the Clock Pulse Lead.......................... 201



Figure 10.13. Final Design Truth Table After Corrections ................................................. 201 Figure 10.14. Karnaugh Maps for Multi-pulse Design........................................................ 202 Figure 10.15. Karnaugh Maps for Design of Level Inputs .................................................. 203 Figure 10.16. J-K Flip-Flops With Multi-Pulse Design ...................................................... 203 Figure 10.17. Transition Table for Fundamental Mode....................................................... 204 Figure 11.1. The General Form for an Iterative Circuit....................................................... 215 Figure 11.2. One-Input Parity Checking Iterative Circuit Design ....................................... 217 Figure 11.3. Two-Input Parity Checking Iterative Circuit Design ...................................... 217 Figure 11.4. Transition Tables for a 2-out-of-9 Iterative Circuit......................................... 218 Figure 11.5. Transition Tables for an Even Number of Adjacent Inputs High ................... 219


Chapter 1: A Bit of the History of Computing

1. A Bit of the History of Computing 1


The history of mechanical, electro-mechanical and electronic computers is a very interesting subject, filled with innovation, expectation, success and failure (and yes, politics). It was inspired by geniuses in many areas and fueled by the dedicated work of a vast number of scientists and engineers.

The word "calculator" is derived from the Latin "calculus" for pebbles, implying a connection between the use of stones and calculation that is lost in antiquity. The abacus, an ingenious device still used in some parts of the world, was probably developed in primitive form over 5000 years ago. However, records of mechanical calculators that could add and subtract do not appear until the early 17th century.

In 1623, Wilhelm Schickard designed and built a machine that would automatically add and subtract and partially multiply and divide. However, this event apparently was not well publicized. Blaise Pascal, about 1644, developed a simple addition/subtraction machine, and Leibniz (almost 30 years later) extended it to perform multiplication and division automatically. However, the first commercially successful machine for multiplication, very similar to the Leibniz machine, was developed by Thomas de Colmar in 1820. He called it the Arithmometre.

Joseph Marie Jacquard invented a device to program looms in 1801. This device used a series of cards with punched holes to define the desired pattern. It was a marvelous invention that revolutionized the textile industry and served as an inspiration to those who later were key inventors of modern computers.

Sometime between 1812 and 1822, frustrated by the errors in printed mathematical tables, Charles Babbage conceived a plan for building a Difference Engine that would develop tables using difference methods. He built a simple working model for eight decimal places and up to two differences.
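The difference methods that Babbage's engine mechanized are simple to illustrate. For a polynomial of degree k, the k-th differences of successive values are constant, so once a few seed values have been tabulated, every further entry can be produced by additions alone, with no multiplication. A minimal sketch (Python here; the sample polynomial is an illustrative choice, not from the text):

```python
def difference_table(values, order):
    """Seed the scheme: the initial entry of each difference row."""
    rows = [list(values[:order + 1])]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in rows]

def extend(initials, n):
    """Produce n table values using additions only."""
    state = list(initials)
    out = []
    for _ in range(n):
        out.append(state[0])
        for i in range(len(state) - 1):   # roll each row forward one step
            state[i] += state[i + 1]
    return out

f = lambda x: x * x + x + 41              # illustrative quadratic
seeds = difference_table([f(x) for x in range(3)], 2)
print(extend(seeds, 6))                   # → [41, 43, 47, 53, 61, 71]
```

Babbage's working model carried this out for second differences; his full design called for the same scheme with sixth differences and 20-digit registers.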
However, his design for a 20-place machine working with sixth differences was apparently beyond the mechanical state of the art at that time. The machine was partially built but never fully constructed. A more modest, 14-place machine that worked with fourth differences was built at great expense by George Scheutz in Sweden and was exhibited in London in 1854.

In 1833, after observing the Jacquard attachment to the loom, Babbage developed the idea of a general Analytical Engine that consisted of two parts: 1. The store, in which all initial and intermediate quantities would be placed, and 2. The mill, into which quantities were brought and operated upon. There were two different sets of cards, the first to direct the sequence of operations and the second to specify the variables on which the operations were to be performed. He visualized that many functions would need to be programmed only once and that the engine would possess its own library of such functions. Although his Analytical Engine never reached
1 Most of the historical information in this chapter was taken from the following:

Jerry Roedel, An Introduction to Analog Computers, Pullman-Standard Car Manufacturing Co., November 1953.
William Keister, Alistair E. Ritchie and Seth H. Washburn, The Design of Switching Circuits, D. Van Nostrand Co., Inc., New York, 1951. (The first text on switching theory.)
Peter Calingaert, Principles of Computation, Addison-Wesley, Reading, Mass., 1965.
Herman H. Goldstine, The Computer from Pascal to von Neumann, Princeton University Press, 1972. (In addition to being a scholarly work, this book furnishes a very interesting eye-witness view of the exciting ENIAC/EDVAC era.)


operational status, its organization was of such significant importance that many refer to Babbage as the Father of modern computers.

The years from 1833 to 1890 saw little more than technological improvements in mechanical devices. In 1850, the first key-driven calculator was patented, but the first practical key-driven adding machine was developed by Dorr E. Felt in 1886. In 1887, the first calculating machine to perform multiplication in a single direct operation was introduced by Leon Bollee. Felt patented his "Comptometer" in 1887 and introduced the first practical printing adding machine in 1889. Monroe and Marchant introduced calculating machines in 1911, and by 1920 electric motor drives had been incorporated into their calculating machines.

In 1880, when the returns of the tenth census were being tabulated, John Shaw Billings, Superintendent of the Census, spoke with Herman Hollerith and remarked that "there ought to be some mechanical way of doing this job, something on the principle of the Jacquard looms." After ascertaining that Billings had no desire to claim or use the idea, Hollerith set about designing equipment to perform the task. The company that grew out of his work was the Tabulating Machine Company; eventually it became IBM.

The electric relay was invented by Joseph Henry in about 1830, and by the turn of the century its use was extensive in telegraphy and telephony. In 1937, Howard H. Aiken, then a graduate student in physics at Harvard, and George R. Stibitz at Bell Telephone Laboratories developed computer designs based on electromechanical relays. The Aiken computer (developed jointly by Harvard and IBM) was completed in 1944, and the Bell Labs computer (installed in 1946) enjoyed only brief popularity, as much faster electronic computers were being developed at the same time.

In the late 1930's, John V. Atanasoff, who was on the faculty at Iowa State College, built an electronic machine to solve simultaneous linear equations. John W.
Mauchly, then at Ursinus College, visited with Atanasoff and apparently was inspired by the potential of this electronic technology. Later, at the Moore School of Electrical Engineering at the University of Pennsylvania, he and another graduate student, J. Presper Eckert, developed the design for the first electronic computer, ENIAC (Electronic Numerical Integrator and Computer). It became operational in 1946 and was roughly 500 times faster than the relay machines.

Originally, the ENIAC required a great deal of operator attention, as the program was run in steps with operators setting switches and controlling the program flow. Indeed, for all of the computers built up to this point in time, the operations were entered from paper tapes or cards. John von Neumann felt that operating instructions could be stored in the computer memory and modified as needed by the computer itself. In 1947, he designed a central control unit for the ENIAC, which proved highly successful. He was also instrumental in the design of the EDVAC (Electronic Discrete Variable Automatic Computer, also built at the Moore School) which, from an organizational point of view, became the forerunner of all modern computers (referred to as von Neumann machines).

There were many delays, and before the EDVAC became operational (1950), political problems developed at the University of Pennsylvania. Mauchly and Eckert set up their own company, later to become UNIVAC, and von Neumann moved to the Institute for Advanced Study at Princeton, where he led the development of the IAS machine. Meanwhile, in England, Maurice Wilkes led the development of the EDSAC computer (Electronic Delay Storage Automatic Computer), similar in architecture to the EDVAC, completing its development in 1949, ahead of the EDVAC.


The IAS machine was used as a model for the development of a large number of computers. In the United States, these included the AVIDAC (Argonne Version of the Institute {IAS} Digital Automatic Computer), ORACLE (Oak Ridge Automatic Computer Logical Engine), GEORGE (Argonne National Laboratory), ORDVAC (Ordnance Variable Automatic Computer, Aberdeen Proving Grounds), Whirlwind I (Massachusetts Institute of Technology), ILLIAC (Illinois Automatic Computer, University of Illinois), JOHNNIAC (Johnny von Neumann Integrator and Automatic Computer, RAND Corporation), the IBM 700 and 7000 Series, and the Sperry Rand/UNIVAC 1100 Series. Other countries also developed computers based upon the IAS-type machine. These included the BESK (Binär Elektronisk Sekvens Kalkylator, Sweden, 1953), PERM (Programmgesteuerte Elektronische Rechenanlage München, Germany, 1953), BESM (Bystrodeistwujuschtschaja Electronnajastschetnaja Machina, Russia, 1953) and DASK (Dansk BESK, Denmark, 1957).

Until 1950, the computers developed were all government funded and used principally for defense-oriented research. In 1951, the first commercially available computers were introduced: the Ferranti Mark I in England and the UNIVAC (Universal Automatic Computer) in the United States. It must be pointed out that up to this point in time, the design of the internal circuits of these computers principally followed an intuitive approach as opposed to a formal theoretical approach.

We are now in an era where computers are found in household appliances and children's toys. The use of logic and logic devices pervades our environment and our thinking processes to the point where it is difficult to imagine that logic was not always a formal part of design and general problem solving.
The use of gates and valves in water and steam systems, and the "fail safe" switching designs that were used in the early stages of railroad development (and are now used to a very high degree in all transportation systems), have many aspects related to switching theory and logic design as we know it today.

The foundation for switching theory was laid with the articles published by George Boole in 1848 and 1854, in which he converted the concepts of logic into a formal mathematical form that today we refer to as Boolean Algebra. Although it was developed by Boole "to investigate the fundamental laws of those operations of the mind by which reasoning is performed," Claude E. Shannon - considered by many to be the father of information theory - showed that it could be applied to the operation and design of switching circuits (in his Master's thesis, MIT, 1938). This was really the beginning of switching theory. Since that time, there have been many extensions to the theory, making switching theory an important tool for all scientists and engineers.

By 1938, the telephone central switching offices had already become very large and highly complex switching systems. They were designed more or less intuitively by very talented people. It is quite natural that the theory was quickly utilized at Bell Telephone Laboratories. After World War II, a training program was established at the Laboratories. In 1950, a graduate course in switching theory was developed at MIT around the Bell Labs notes. After some revision, these notes became the first text in switching theory. Intuitive design continues to this day (and always will), but it is at a much higher level now, since it rests on a solid foundation of formal mathematics.

The expressed purpose of this book is to provide students with the theory that underlies the efficient and effective design of combinational and sequential circuits, and to


develop sufficient understanding of the processes involved so that they will have confidence in the operation and reliability of their designs and know that they can be constructed at reasonable, if not minimal, cost. However, this is not the only goal. In the training process, students are presented with problems from many disciplines, and the problems they will face after graduation will likewise come from many disciplines. Therefore, it is important that, as students, they develop an overall ability to organize problems for realistic solutions, regardless of the source. It is not enough to gain a thorough understanding of the physical sciences, although that is essential. They must also develop the ability to think clearly and to understand logical connections and implications.

Although the direct application to hardware development is obvious, it is a fact that nearly all computer software can be viewed as sequential machines, and the theory presented here is as important to the development of efficient programs and systems of programs as it is to the development of the hardware itself. The material in this book has been organized to provide a strong foundation in the underlying theory so that students will have clean and unambiguous concepts upon which to draw in subsequent courses. This text also emphasizes the development of logical and mental acuity to prepare them for the challenging problems that await. There are many areas of engineering and science that have a related body of mathematics that makes the solution of problems possible, even aesthetically pleasing. However, there are few areas in which the basic concepts come together as nicely as they do in switching theory.

Chapter 2 of this book is intended as a review of some basic concepts of set theory and enumeration. There are many concepts here that will be very useful in understanding the material that follows.
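As a small taste of why the enumeration review matters, two counts recur throughout switching theory: a table of combinations on n variables has 2^n rows, and there are therefore 2^(2^n) distinct switching functions of n variables, one per possible output column. A quick check (Python; purely illustrative, not from the text):

```python
from math import comb

n = 3
rows = 2 ** n          # input combinations for n variables
functions = 2 ** rows  # distinct output columns, i.e. switching functions
print(rows, functions)  # → 8 256

# A typical "set size" question: ways to choose 2 cells out of the 2**n
print(comb(rows, 2))    # → 28
```

The growth of these numbers is one reason organized, systematic methods are needed rather than exhaustive search.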
Most students have been exposed to this material before, and a review is all that is required. The concepts of a function and its domain and range are perhaps of greatest importance conceptually, but the other material will also help to set the stage for better and more efficient communication. I have found that several intractable problems become easily solvable if the problem is organized properly. However, to organize requires an understanding of combinations and set sizes. Hence, the basic concepts of enumeration have been included.

Chapter 3 is devoted to the introduction of Boolean Algebra and to how the postulates relate to commonly used switching elements. The treatment here has been first to provide a bit of motivation and then to develop the mathematics in such a way that students will have confidence in the applicability of the algebra to the problems at hand. Chapter 4 expands upon the application of Boolean Algebra to switching functions, their properties and the manipulations commonly performed.

Chapter 5 is devoted to the techniques for minimizing the cost of circuit realizations. This material has gone through cycles of popularity as technologies have changed. There may be some teachers who would prefer to omit this material, and it can be omitted without very seriously impacting the material in other chapters. The material is of considerable importance, however, since it provides an additional viewpoint of switching function manipulation and builds confidence in the students, since they know they can derive a minimal-cost form for a variety of criteria.

Chapter 6 relates the material now in hand to circuits using components other than and, or, and not. This chapter is also relatively independent of the other chapters and is probably of greatest interest to those who will definitely be going into circuit design. However, its


coverage will be of benefit through broadening the student's point of view, expanding their level of abstraction and, again, increasing their confidence level.

The first six chapters will provide the student with an in-depth background in combinational circuit design. The next four chapters develop the theory and design of sequential circuits. It is in this area that this book departs from the approach generally used. Most texts approach sequential circuits by first teaching the analysis of sequential circuits. This book introduces design first. This approach has been found to be considerably less frustrating to student and teacher alike, since it can focus on concepts one at a time.

Chapter 7 introduces the concepts of sequential system states, state graphs and transition tables. In this chapter, students are taught to convert word problems into state graphs and to develop transition tables, which become the principal focus for the remainder of the design procedures.

Chapter 8 introduces the principal methods for finding minimal-state equivalent transition tables. The Huffman-Mealy method is introduced first, in an extended form, to help secure the basic properties of the transition table and to develop an understanding of the constraints imposed by the various types of circuits. The compatible pairs method of Paull and Unger is then presented and, finally, a somewhat expanded version of the Grasselli-Luccio method for selection of sets to produce a minimum-state system is presented.

Chapter 9 presents the basic requirements and goals of state assignment. A simple technique for state assignment is presented that works very well for small systems and emphasizes the principles involved, so that reasonable decisions can be made when working with larger systems.

Chapter 10 presents a simple technique for final circuit design with standard components. The presentation is more related to intuitive techniques than most of those found in modern texts.
The chapter ends with a simple discussion of sequential circuit analysis (because it draws on the design procedure) and a brief discussion of essential hazards. Chapter 11 utilizes the tools developed for sequential circuits to design iterative combinational circuits. Several appendices have been included for those teachers who wish to vary the emphasis in their classes. Appendix I presents a brief set of rules and concepts with regard to number systems. Appendix II contains notes and rules for application of switching theory concepts to logic. Appendix III contains similar notes for application to probability theory.

Chapter 2: Basic Concepts for use with Switching Theory

2. Basic Concepts for Use with Switching Theory


2.1. Introduction

A very important aspect of efficient thinking and efficient communication with others is to have symbols (such as words) that have a clear and precise meaning with a minimal chance of ambiguity. The efficiency of these processes is also affected by the simplicity of the concepts involved, or how "clean" they are in the sense of being free from exceptions or from the context in which they are used. The evolution of mathematics and the evolution of technology have progressed together, with each motivating the other. Undoubtedly, there will be many innovations in the future that will simplify even further the concepts and processes outlined in this text. But for now, we can be very thankful for the many clean concepts that have developed through the mathematics of set theory and through the considerable efforts of those who have preceded us in the area of switching theory. Although many students will have had exposure to set theory before, it is so important to the development of a clear understanding of all that follows that some of the major concepts are presented here. Terms taken from set theory will be used freely in the development of switching theory, and it is important that the reader know their meanings precisely. This chapter is intended to cover or review those concepts and abstractions that are needed to understand the material that follows, and to ensure that we have a common basis for efficient communication of new ideas.

2.2. Basic Concepts from Set Theory

A set is a collection of objects called elements. The collection is not ordered in any way. From time to time, references will be made to ordered sets; however, an ordered set is not a set in accordance with this definition, but may well be an element in a set of ordered objects. First, it is necessary to decide on all of the elements to be considered. This collection of elements is called the Universe, denoted U, and its elements may be declared either directly or indirectly.
For example, let U be the first three letters of the alphabet, or U = {a,b,c}. There are also infinite sets that cannot be enumerated, such as the set of all integers or the set of all real numbers. Once a set has been defined, we can talk about subsets. For convenience, mathematicians have two types of subsets. A subset of a set is any grouping of elements from that set, including the entire set itself. A proper subset of a set is a subset that does not contain all of the elements of that set. A very important set is the empty set, also called the null set. The empty set has no elements and is given the name ∅. So ∅ = { }, and ∅ is a subset of all sets. The complement of a set S, written S̄, consists of those elements of the Universe that are not in S. For example, let U = {a,b,c,d} and S = {a,b}; then S̄ = {c,d}. We also have that the complement of U is ∅, the complement of ∅ is U, and the complement of S̄ is S itself.

There is some symbolism that aids in writing information efficiently. The symbol ∈ is used for "is an element of" and ∉ is used for "is not an element of." The symbol ⊆ is used for "is a subset of" and the symbol ⊂ is used for "is a proper subset of."
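As an aside, these symbols map directly onto the set operators of a modern programming language; a small sketch in Python (the example sets U and S are illustrative choices, not from the text):

```python
U = {"a", "b", "c", "d"}   # the Universe set
S = {"a", "b"}

# "is an element of" / "is not an element of"
print("a" in S)        # True
print("c" not in S)    # True

# "is a subset of" (<=) and "is a proper subset of" (<)
print(S <= U)          # True: every element of S is in U
print(S < U)           # True: S is a subset of U and S != U
print(U < U)           # False: a set is never a proper subset of itself
```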


Recapping what has been said to this point:

1. A set is a collection of objects called elements of the set.
2. A set S is a subset of set T if and only if every element of S is an element of T. It is a proper subset if it has fewer elements than T.
3. The collection of all elements of concern is the Universe set, denoted U.
4. The set with no elements is the empty set or null set, ∅.

Problem 2.1. Verbalize and discuss the truth (or falseness) of the following: Given that S ⊂ U and T ⊂ U, then if a ∈ S and a ∉ T, then T ⊂ S.

Answer: Verbalized, the statement reads: given that S is a proper subset of U and T is a proper subset of U, then if a is an element of S and a is not an element of T, T is a proper subset of S. This might be true, but it also might not be true. The proof is by counterexample. Since S is a proper subset of U, it will not contain all of the elements of U. T is also a proper subset of U, but since no further information is given, it may well contain an element not in S. If it does, then T cannot be a subset of S. Note that a statement is TRUE only if it is always true; otherwise it is FALSE.

Problem 2.2. Verbalize and discuss the truth or falseness of the following:
a. If S ⊆ X and T ⊆ S, then T ⊆ X.
b. If a ∈ S and b ∈ S and T = {a,b}, then T ⊆ S.
c. If a ∈ S and T ⊆ S, then a ∈ T.

2.2.1. Set Operations

The complement of S was obtained by removing the elements of S from the Universe set; the remaining elements were denoted S̄. The bar above S can also be thought of as an operator which operates upon the set S, in conjunction with the Universe set, to produce the complement of S. Since the complement of any set is always taken with respect to the Universe set, it is not necessary to include it in the notation, and indeed, the bar is generally viewed as a unary operator, that is, an operator that operates with a single operand. There are two other operations, called Union and Intersection.
These are both binary operators (that is, they require two operands) that result in the formation of another set. The Union operator is given the symbol ∪. If X = S ∪ T, then X will contain all of the elements found in either S or T. For example:

Let S = {a,b} and T = {b,c}. Then X = S ∪ T = {a,b,c}.

The Intersection operator is given the symbol ∩. If Y = S ∩ T, then Y will contain those elements of S also found in T. These operations are shown through the use of pictures often referred to as Boolean Rings or Venn Diagrams.
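The same operations are built into Python's set type; note that the complement must be computed against an explicitly stored Universe, since the language has no notion of U (the example sets are my own):

```python
U = {"a", "b", "c"}
S = {"a", "b"}
T = {"b", "c"}

union = S | T          # all elements in either S or T
intersection = S & T   # elements in both S and T
complement_S = U - S   # complement of S, taken with respect to U

print(union == {"a", "b", "c"})   # True
print(intersection == {"b"})      # True
print(complement_S == {"c"})      # True
```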


Figure 2.1. Venn Diagrams Showing Complement (S̄), Union (S ∪ T) and Intersection (S ∩ T)

In these diagrams, the box represents the Universe set. The circles represent sets from the Universe. In the first diagram, everything that is in the Universe but not in S is S̄. In the second diagram, the area representing all elements that are in either S or T is shaded to represent the union of S and T. In the third diagram, the intersection of S and T has been shaded to represent all elements that are in both S and T.

Problem 2.3. Given U = {a,b,c,d,e,f,g}, X = {a,b,c}, Y = {d,e,f}, Z = {a,d,e,f}, find the following:
a. X ∪ Y
b. X ∩ Z
c. X ∩ Y ∩ Z
d. X ∪ Y ∪ Z

2.3. Set Equality

Problems in set theory frequently require determining whether or not sequences of operations will result in the same set. Two sets are equal if and only if the elements of each are exactly the same. The methods of proof include a one-to-one comparison of the elements in the two sets. When only two or three variables are involved, the Venn Diagrams can be used as a general method of proof. Algebraically, two sets A and B are equal if and only if each is a subset of the other (A ⊆ B and B ⊆ A).

Problem 2.4. Given a Universe with subsets X, Y and Z, what conditions would have to exist for X ∪ Y = X ∪ Z?

Answer: The three-variable Venn Diagrams become those shown in Figure 2.2. Upon examination, we see that these two can only be equal if the areas outside their intersection are actually empty. That is to say, the part of Z outside X ∪ Y and the part of Y outside X ∪ Z must have no elements. In other words: Z ∩ (X ∪ Y)̄ = ∅ and Y ∩ (X ∪ Z)̄ = ∅. Note: When the intersection of two subsets is the null set, the sets are said to be disjoint or mutually exclusive.


Figure 2.2. Venn Diagrams for Three Variables (X ∪ Y and X ∪ Z)

Problem 2.6. Show that: (X ∩ Y) ∪ (Y ∩ Z) ∪ (X̄ ∩ Z) = (X ∩ Y) ∪ (X̄ ∩ Z)

For more than three variables, circles cannot be used, and other diagrams called Veitch Diagrams or Karnaugh maps are used. They are used in essentially the same fashion but will be introduced later, after more basic concepts have been presented. It can be seen from the preceding problems that it would be nice to have a mathematics that would permit the manipulation of equations, such as factoring, multiplication of factors, etc. Indeed, this will come. But first, some more concepts.

2.3.1. Sets and Binary Relations

Prior to this, set elements have had no specific properties and the concept of order has not arisen. There are many occasions when the ordering of elements is important. For example, consider an (x,y) coordinate system where (2,3) represents x = 2, y = 3. Another example is the function y = sin(t), where coordinates of the function can be represented by a set of ordered pairs (t,y). The mathematicians have given us a very clean and precise way of looking at these things, and it begins with the concept of a set of ordered pairs. If there are two sets S and T, and the elements of S are denoted si and the elements of T are denoted ti, then if one element from S is selected as the first element, and one element from T is selected as the second element, we have an ordered pair (s,t). (Note: We will always use parentheses to indicate ordering. The curly brackets will indicate unordered sets, as they have been used to this point.) If there are N elements in S and M elements in T, then since each element in S can be matched with every element in T, there are N × M possible pairs. The set which contains all of these pairs represents the largest possible set of ordered pairs "from S into T"; it is called the Cartesian product, or the product set, and is written S × T.
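The product set is available directly as itertools.product in Python; a small sketch (the example sets are my own):

```python
from itertools import product

S = {1, 2}
T = {"x", "y", "z"}

# the Cartesian product S x T: every ordered pair (s, t)
pairs = set(product(S, T))
print(len(pairs))           # 6 = |S| * |T| = 2 * 3
print((1, "x") in pairs)    # True
```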

Problem 2.5. The following are important theorems in the study of logic and switching theory. Use Venn Diagrams to show that they are true.
a. DeMorgan's Law: (X ∪ Y)̄ = X̄ ∩ Ȳ ; (X ∩ Y)̄ = X̄ ∪ Ȳ
b. Distributive Law: X ∪ (Y ∩ Z) = (X ∪ Y) ∩ (X ∪ Z) ; X ∩ (Y ∪ Z) = (X ∩ Y) ∪ (X ∩ Z)
c. Absorption Law: X ∪ (X ∩ Y) = X ; X ∩ (X ∪ Y) = X
d. Associative Law: (A ∪ B) ∪ C = A ∪ (B ∪ C) ; (A ∩ B) ∩ C = A ∩ (B ∩ C)
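For a small Universe, these theorems can also be verified exhaustively by machine rather than with Venn diagrams; a sketch using Python's frozenset (the three-element Universe and helper names are my own choices):

```python
from itertools import chain, combinations

U = frozenset({1, 2, 3})

def subsets(u):
    """All subsets of u, returned as frozensets."""
    items = list(u)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def comp(s):
    """Complement with respect to the Universe U."""
    return U - s

for X in subsets(U):
    for Y in subsets(U):
        # DeMorgan's Laws
        assert comp(X | Y) == comp(X) & comp(Y)
        assert comp(X & Y) == comp(X) | comp(Y)
        # Absorption Laws
        assert X | (X & Y) == X
        assert X & (X | Y) == X
        for Z in subsets(U):
            # Distributive Laws
            assert X | (Y & Z) == (X | Y) & (X | Z)
            assert X & (Y | Z) == (X & Y) | (X & Z)
print("all laws hold for every choice of subsets of U")
```

Because every subset of U is tried in every position, this is a complete proof of the laws for this particular Universe, in the same spirit as shading every region of a Venn diagram.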


Problem 2.7. Given S = {a,b} and T = {b,c,d}, form the Cartesian product.

A Binary Relation (or simply Relation) is defined as a subset of the Cartesian product. It can simply be enumerated, but most generally it is specified by naming the sets and providing a rule by which the set of ordered pairs can be generated. A symbolism frequently used to denote a relation is sRt, which means "s stands in relation to t" or "s relates to t." As an example, consider sets S and T, where S = {1,2,4,8} and T = {3,5}. The set of pairs from S and T defined by the rule s < t is the subset of S × T: {(1,3), (1,5), (2,3), (2,5), (4,5)}.

Problem 2.8. Given S = {1,2,3} and T = {1,3,5}, generate the binary relations (subsets of S × T) for the rules given:
a. element s is numerically less than element t (s < t)
b. element s is numerically less than or equal to element t (s ≤ t)
c. element s is numerically greater than element t (s > t)
d. element s is numerically equal to element t (s = t)

We are frequently interested in relations where S and T are the same set, for example the set of all real numbers. We then say we have a relation "on S." There are several important properties that relations on S can be said to possess:

Reflexive: A binary relation is reflexive if aRa is in the relation for all a in S.
Symmetric: A binary relation is symmetric if for every aRb in the relation, bRa is also in the relation.
Transitive: A binary relation is transitive if for all aRb and bRc in the relation, aRc is also in the relation.

The relation described above, s ≤ t, is called the inclusion relation. If we consider it over all real numbers, we see that because of the equal sign, it is reflexive. It is not symmetric, since (2,3) is in the relation but (3,2) is not. It is transitive, since (2,3) and (3,5) being in the relation implies that (2,5) will also be in the relation.

Problem 2.9. Given S = {1,2,3} and that T = S, which properties are possessed by each of the relations in the previous problem?
If S is not equal to T, then it is generally assumed that we have a relation on X = S ∪ T. With S and T as defined in Problem 2.8, which properties are possessed by each of the relations?

A very important relation is the equivalence relation. An equivalence relation on S is said to exist if, for every pair of elements in the relation, the two elements are equivalent. The equivalence relation generally carries with it the substitution property. That is to say, any expression containing the left-hand element may be restated, substituting the right-hand element for the left-hand element, without changing the validity or value of the expression. In most algebras, the equal sign (=) is used to imply the existence of an equivalence relation. For example, in standard algebra, if x = y + z, then x may be substituted for y + z in all equations in the system. The equivalence relation is reflexive, symmetric and transitive.

2.4. Functions

A (single-valued) function t = f(s) can be defined as a non-empty binary relation in which each element of S appears at most one time. Many basic concepts with regard to functions will be needed in this book, and there are many ways of viewing these concepts.
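The three relation properties are easy to test mechanically when a relation is stored as a set of ordered pairs; a sketch (the function names and example relation are my own, chosen to match the inclusion-relation discussion above):

```python
def is_reflexive(R, S):
    """aRa must hold for every a in S."""
    return all((a, a) in R for a in S)

def is_symmetric(R):
    """For every aRb, bRa must also be present."""
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    """For every aRb and bRc, aRc must also be present."""
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

S = {1, 2, 3}
R = {(a, b) for a in S for b in S if a <= b}   # the relation s <= t on S

print(is_reflexive(R, S), is_symmetric(R), is_transitive(R))
# True False True: reflexive and transitive, but not symmetric
```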

Probably the most familiar to students is the graphic presentation of a function. Most students can visualize the graph for t = sin(s). The function is the binary relation (the set of ordered pairs or coordinates); t = sin(s) is a formula, the rule that allows us to pick, from the infinite number of elements of S × T, those which lie in the relation. We will retain this definition of a function: it is a non-empty set of ordered pairs; the first element in each ordered pair is from S, and the second element is from T; and no element from S may appear more than once.

Consider now the traditional "black box" with one input variable, S, and one output variable, T. If an input condition produces the same output whenever that input condition occurs, and if this is true for every input condition, then the output variable is a function of the input variable, and we may write t = f(s) or f : S → T. The latter would be verbalized as "the function f which takes (or maps) elements of S into elements of T" or as "f which takes s into t."

Frequently, in the design of digital circuit "black boxes," we will find that there are some values or elements of S for which we do not need to specify the output. Perhaps we cannot specify them, or perhaps it is not important what value the output has under those particular input conditions. In switching theory, these elements have come to be referred to as "don't care" elements which, although crude in terminology, describe the phenomenon quite well. Once the design process has been completed, the resultant design will provide a value in T for every element in S.

The domain of a function consists of all the elements of S that occur in the function set (all of the elements that appear in the first position of the ordered pairs that form the relation). In the context of the black box mentioned above, the domain of the function, when built, will be all possible values taken on by the input variable.
That is, the domain of the function is S. During the design phase, it may be impossible, or at least undesirable, to define the T values for some elements of S. In this event, we could say that the domain of the design function is a subset of S. However, we will not do that, since eventually the domain of the function will be all of S. Instead, we will elect to say that the function is not well-defined, or that it is incompletely defined, and use the term incomplete function to describe the function at this point. After the circuit has been designed, the circuit will perform as a complete function, since every value s will have a corresponding value t. Therefore, for our "black box" the set S is the domain of the complete function. That is to say, the domain of the function is the set of elements for which values t exist (which may or may not exist during the design phase, but eventually will). Note: at least one value s must have a corresponding value t, and no value s may appear in more than one ordered pair. If every element of S enters into the relation, then the function is said to be complete. Otherwise, it is said to be incomplete.

The set of all of the second elements in the set of ordered pairs is the range of the function, sometimes referred to as the image of S under f. If every element of T enters into the relation, the function is said to be onto. If every element of the range enters into the relation only once, the function is one-to-one and invertible. ("Invertible" means that given any element t, there is a unique element s, and that an inverse function exists.) A subset B of T may or may not be in the range of a function that is not onto T. Those elements of S which map into the subset B form a set called the inverse image of B under f, denoted f⁻¹(B). (This f-inverse of B is a subset of the domain and does not require that the function be invertible.)
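Treating a function literally as a set of ordered pairs, the domain, range, and inverse image can all be computed directly; a sketch with made-up values:

```python
# a function stored as a set of ordered pairs (s, t); illustrative values
f = {("a", 1), ("b", 2), ("c", 1)}

domain = {s for (s, t) in f}
range_ = {t for (s, t) in f}               # the image of S under f

# inverse image of a subset B of T (f need not be invertible)
B = {1}
inv_image = {s for (s, t) in f if t in B}

print(domain == {"a", "b", "c"})           # True
print(range_ == {1, 2})                    # True
print(inv_image == {"a", "c"})             # True

# one-to-one (invertible) iff no value t repeats
print(len(range_) == len(f))               # False: "a" and "c" both map to 1
```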

The idea of onto comes from the concept of mapping. It is quite valid to visualize a point s as mapping into a point t, where t lies in a different coordinate system. Mappings can be one-to-one, one-to-many, many-to-one, and many-to-many. Only one-to-one mappings are invertible, but one-to-one and many-to-one mappings with each element of S appearing only once are mathematically equivalent to functions. The domain of the mapping may be a proper subset of S and the range may be a proper subset of T. Frequently, S is referred to as the departure set and T is referred to as the arrival set. Figure 2.3 gives a pictorial view of some of the terms introduced above.

There are other terms that are used synonymously with those mentioned here and that are a bit more descriptive of the particular phenomena they describe or of the mathematical system in which they are used. For example, in studying transformations of S into T, S is called the domain and T is called the co-domain. In this text, the domain will always comprise the entire set S and the range will always comprise the entire set T. We will have functions during the design phase that could be thought of as having a domain which is a proper subset of S, but we will instead choose to say the function is an incomplete function.

To specify a function requires the definition of a domain, a range, and a formula or rule that permits the selection of the ordered pairs that make up the binary relation. The term formula does not appear to have a formal mathematical definition. Most of the time the implication is that if you have a formula, you can also determine the function. However, since there is no formal definition to this effect, we may have a formula that carries significantly different meanings when applied to different sets. Indeed, a formula that does imply a function over one set can be entirely meaningless when applied to another set, even though the same basic mathematical system is under consideration.
Figure 2.3. Pictorial Representation of Set Terms. (The figure shows the departure set S containing the domain, the arrival set T containing the range (the image of S under f), and a subset B of T together with its inverse image f⁻¹(B).)


Problem 2.10. Show examples of two complete functions that:
a. have the same domain and range but are not equal
b. have the same domain and formula but are not equal
c. have the same formula and range but are not equal

2.5. Properties of Operators

Students at this level are quite familiar with the symbols +, -, etc., and use them without giving them much thought. However, these symbols represent operations or manipulations that are performed on elements of sets to produce other elements. They have specific rules that must be obeyed if correct results are to be obtained.


One property of an operator is the number of operands it requires. If only one operand is required, it is said to be a unary operator. If two operands are required, it is said to be a binary operator. Unfortunately, a certain amount of sloppiness has been allowed to perpetuate, and some symbols can be both unary and binary depending on the way they are used. For example, consider -2 - 4. The first minus sign would normally be interpreted as a unary operator, converting the real positive number 2 to a negative number. The second minus sign implies that a positive 4 is to be subtracted from the negative 2. It is also possible for the "-2" to be viewed overall as the "name" of the negative integer, so the minus sign takes on a strictly alphabetic meaning.

There is a second, subtle problem involved here, and that is the concept referred to as the hierarchy of operations. Note the different answers obtained depending on the interpretation being (-2) - 4 or -(2 - 4). Students are generally taught to use parentheses to indicate which operations must be performed first whenever an ambiguity might exist. However, to reduce the number of parentheses that have to be used, a hierarchy has evolved which is automatically applied when parentheses are absent. There is at least one more way to resolve possible ambiguities. Consider: 8 / 2 / 2 = (8 / 2) / 2 or 8 / (2 / 2). The ambiguity here can be resolved by always going left to right where operators of equal hierarchy are involved (or always right to left).

Since a binary operator requires two elements of a set, we can think of the operator working with all ordered pairs in that set; or, in other words, S × S. For example, we could simply write c = +(a,b) to indicate the operator +, which then describes how the element c is formed from elements a and b. Whereas this type of notation would be legitimate, and indeed is somewhat equivalent to "Polish notation," we have become more accustomed to writing c = a + b.
(The Polish notation is much easier to implement when writing machine-language programs for computers.) Since we are talking about ordered pairs, an important property of a binary operation is whether the result is the same regardless of order. For example, is +(a,b) = +(b,a)?

There are three principal properties of operators that will be dealt with here. Let + and · be general operators.

Commutative: An operator is said to be commutative if a + b = b + a.
Associative: An operator is said to be associative if (a + b) + c = a + (b + c).
Distributive: An operator (·) is said to distribute over another operator (+) if a · (b + c) = (a · b) + (a · c).

The last law could also apply to an operator being distributive over itself. Although it is not normally done, we could also define the properties of unary operators with respect to binary operators. There is always concern as to whether the sets contain all the elements needed for the operations. If the result of an operation on any elements of a set is an element which is also in the set, then the set is said to be closed with respect to that operation.
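When the carrier set is small, all four properties (the three above plus closure) can be checked exhaustively. A sketch, using max as "+" and min as "·" over a small set of integers; these stand-in operators are an illustrative choice of mine, not from the text:

```python
# Exhaustive property checks for two binary operators on a small set.
S = range(5)
plus = max     # plays the role of "+"
times = min    # plays the role of "."

commutative = all(plus(a, b) == plus(b, a)
                  for a in S for b in S)
associative = all(plus(plus(a, b), c) == plus(a, plus(b, c))
                  for a in S for b in S for c in S)
distributive = all(times(a, plus(b, c)) == plus(times(a, b), times(a, c))
                   for a in S for b in S for c in S)
closed = all(plus(a, b) in S for a in S for b in S)

print(commutative, associative, distributive, closed)
# True True True True: min distributes over max on a totally ordered set
```

This pair of operators previews the next chapter: in Boolean Algebra on {0, 1}, OR and AND behave exactly like max and min.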


Problem 2.11. For this problem, let us consider that a binary operator may distribute over itself, and define the distributive property for unary operators with respect to binary operators as the property that -(a + b) = (-a) + (-b). Consider the standard arithmetic and calculus operators: Unary Operators = {+, -, d/dt, ∫ dt}; Binary Operators = {+, -, ×, /, **} (** implies exponentiation). Place the binary operators across the top of a chart. Then place both unary and binary operators down the side, and show (with yes or no entries in the intersecting cells) whether or not they distribute over the binary operators (or over themselves, in the case of the binary operators).

2.6. Mathematical Systems

A mathematical system is a set M = {S, R, P}, where S is the set of elements involved, generally including constants and variables, and R and P are sets of relations and postulates that define the operators and establish the rules for their application. In the next chapter, Boolean Algebra will be presented as such a system. Boolean Algebra provides the mathematical basis for most of what is done in switching theory. It is also the basis for studies in logic and in probability theory. An in-depth understanding of the algebraic manipulation will be beneficial to students not only in switching theory, but in many other areas as well. Before moving into this material, it is best to review the basic concepts involved in determining the size of sets. Keep in mind that functions are sets. These techniques will enable us to understand magnitudes better, something that is always important in solving problems.

2.7. Enumeration

Since a set is a collection of elements, we may select elements from the set to make subsets, ordered pairs, triples, etc. When selecting from one or more sets, we would generally like to know how many different sets of each kind can be formed.
As a matter of fact, this is the basis for the classical approach to probability theory, where the probability of a specific event is 1 divided by the total possible number of events (assuming they are all equally likely), and the probability of the occurrence of (one of) a subset of events is the number of events in that subset divided by the total number of possible events.

One of the easiest ways to visualize the enumeration process is to consider the set of elements as being balls in an urn. When an element is selected from the urn, there are n possible selections (where n is the number of elements in the urn). If a second ball is drawn, then regardless of the ball that was selected first, there are n - 1 possible selections. We now consider the drawing as the selection of an ordered pair. The result is the pair (First Selection, Second Selection). In the first selection position, there are n possible elements. In the second position, regardless of which element was chosen first, there are n - 1 possible selections. The total possible number of ordered pairs is therefore n(n - 1). This basic process has been dignified and formally stated as the Fundamental Principle of Combinatorial Analysis or the Basic Combinatorial Principle:

Given K sets Sk, of size nk, the number of possible ordered K-tuples (s1, s2, ..., sK), where si is an element from the set Si, is the product of the ni for i = 1 to K.

There are three ideas or problems that underlie all enumeration problems. These are:
1. The size of the set of all ordered k-tuples from a set of size n.
2. The number of permutations of a set of size k.
3. The number of (unordered) subsets of size k from a set of size n.
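The principle can be demonstrated directly with itertools.product; a sketch with three small sets of my own choosing:

```python
from itertools import product

# Basic Combinatorial Principle: choosing one element from each of K
# sets yields n1 * n2 * ... * nK ordered K-tuples.
S1 = ["a", "b", "c"]   # n1 = 3
S2 = [0, 1]            # n2 = 2
S3 = ["x", "y"]        # n3 = 2

tuples = list(product(S1, S2, S3))
print(len(tuples))     # 12 = 3 * 2 * 2
```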


The second of these is simply an extension of the first to include all elements of the set, and the third is merely an application of the first two. All are simply extended applications of the Fundamental Principle of Combinatorial Analysis. These three ideas or problems are now examined in greater detail.

2.8. Size of Ordered and Unordered Sets

Consider a number of urns containing balls that are distinct in their characteristics. For example, we might have Urn 1 containing balls labeled a, b, c, and d. We can call the balls in the urn a 'population.' We can also name the set and enumerate its elements as S1 = {a,b,c,d}. The number of possible subsets of size 1 in the above set is obviously 4. If one element is to be selected from the urn, then it should be apparent that there are 4 possible selections; thus, the number of sets of size 1 is 4.

2.8.1. Sets of Ordered Sets

In selecting balls from an urn, we can form sets of sizes 2, 3, etc. It becomes apparent that there are really two problems; one is the number of sets with replacement, and the other is the number of sets without replacement. Notes: (a) For example, without replacement, the number of possible sets of size 2 in the above example can be obtained by seeing that on the first draw we can have one of four possible balls. If we do not replace the ball, then on the second draw we can draw one of three possible remaining balls. Since this would be true regardless of which ball was drawn first, the total number of different drawings that could occur would be 4 × 3 = 12.
We can form them as follows:

(a,b) (b,a) (c,a) (d,a)
(a,c) (b,c) (c,b) (d,b)
(a,d) (b,d) (c,d) (d,c)

(b) If we replace the ball in the urn before drawing a second time, then we could also select one of four on the second drawing, obtaining a possibility of 4 × 4 = 16 different drawings, which would yield:

(a,a) (b,a) (c,a) (d,a)
(a,b) (b,b) (c,b) (d,b)
(a,c) (b,c) (c,c) (d,c)
(a,d) (b,d) (c,d) (d,d)

If we examine the elements of the set described in (a) above, we see that the pairs (a,b) and (b,a) are both present. In some problems, we find that these two ordered sets will be treated in the same way, and so we do not consider them as different. We are then interested in the size of unordered sets. This is a bit tricky until we see that what we are interested in is first considering the size of the set of all ordered sets, and then recognizing that these ordered sets consist of all unique permutations of the unordered sets.

2.9. Permutations

The number of permutations of a set of elements is exactly the problem of drawing all the balls from the urn without replacement. Thus (if there are M balls), the first draw could result in M possible events. For each of the first draws, the second could consist of M - 1 possible events. For each of the first two draws, the third could consist of M - 2 possible events, etc., yielding a total of:
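Both drawings can be reproduced with the standard itertools module; a sketch of the four-ball urn above:

```python
from itertools import permutations, product

urn = ["a", "b", "c", "d"]

# (a) two draws WITHOUT replacement: ordered pairs of distinct balls
without = list(permutations(urn, 2))
print(len(without))    # 12 = 4 * 3

# (b) two draws WITH replacement: any ordered pair, repeats allowed
with_repl = list(product(urn, repeat=2))
print(len(with_repl))  # 16 = 4 * 4
```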


M(M - 1)(M - 2) · · · 1. (The last draw has only one possibility, since there is only one ball left.) So we see that there are M factorial (M!) possible permutations of M elements. Or, restating in another way, every unordered subset of size M produces M! ordered subsets of size M.

2.10. Unordered Subsets

If we now return to (a) above, we see that the number of unordered subsets x of size 2, when permuted, would yield x · 2! ordered pairs. Therefore, x · 2! = 12 (which was obtained as 4 × 3), or x = (4 × 3) / 2! = 6. If we enumerate them, we see that they are:

{a,b} {a,c} {b,c} {a,d} {b,d} {c,d}

Some authors will define the product of the first M terms of N factorial, N(N - 1) · · · (N - M + 1), as NM, and the formula above, for the general case, becomes NM / M!. On the other hand, it really isn't necessary to define a new symbol, since the missing terms of N factorial are precisely (N - M)!. Therefore, we can almost as easily write that NM = N! / (N - M)!, having added the necessary (product) terms to complete the factorial in both the numerator and the denominator. Therefore, we can make the following statements:

1. The number of permutations of K elements is K!.
2. The number of ordered subsets of size K in a larger set of size N is given by N! / (N - K)!.
3. The number of unordered subsets of size K in a larger set of size N is given by N! / (K!(N - K)!).

This last formula also gives the coefficients obtained in the expansion of (X + 1)^N, which are called binomial coefficients. They are frequently denoted by the symbol C(N,K), read "N choose K":

C(N,K) = N! / (K!(N - K)!)

They are also the numbers that generate Pascal's triangle, and the relationships exist that:

(a) C(N+1, M+1) = C(N,M) + C(N,M+1)

(b) the sum of C(N,K) over K = 0 to N is 2^N,

where C(N,0) is defined as 1.

When enumerating all possible unordered subsets of a set of size n, we may also look at the problem in a slightly different way. For example, in forming a subset, we might look at each element in the population individually and flip a coin as to whether or not we include it in the set. For n elements, we have two choices for each element, giving us 2 · 2 · · · 2 = 2^n. This includes the set of all elements and the set of no elements. This is basically the proof of (b) above, with M = 0 giving the null set.

Problem 2.12. Given a set S = {a,b,c,d,e}, so that the size is 5:
a. How many ways can the set be ordered?
b. How many ordered sets of size 3 can there be? (drawing without replacement)
c. How many unordered subsets of size 3 are there? (also without replacement)
d. How many (unordered) subsets (of all sizes) are there?
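The three counting statements, along with identities (a) and (b), can be checked with the standard math module; a sketch using the four-ball urn example (N = 4, K = 2):

```python
from math import comb, factorial

N, K = 4, 2

print(factorial(N))                      # 24 orderings of the 4 balls
print(factorial(N) // factorial(N - K))  # 12 ordered subsets of size 2
print(comb(N, K))                        # 6 unordered subsets of size 2

# identities (a) and (b) from the text
assert comb(N + 1, K + 1) == comb(N, K) + comb(N, K + 1)
assert sum(comb(N, k) for k in range(N + 1)) == 2 ** N
```

The counts 12 and 6 match the ordered and unordered enumerations of the urn drawings listed earlier in this section.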

(b)

=2 K
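These counting formulas can be checked directly with Python's `math` module; a minimal sketch (the values N = 4, K = 2 are just the section's example of subsets of size 2 drawn from {a,b,c,d}):

```python
from math import factorial, comb, perm

N, K = 4, 2
assert perm(N, K) == 12                               # ordered subsets: N!/(N-K)! = 4*3
assert comb(N, K) == perm(N, K) // factorial(K) == 6  # unordered subsets: divide by K!

# Pascal's triangle relation (a): C(N+1, M+1) = C(N, M) + C(N, M+1)
for n in range(10):
    for m in range(n):
        assert comb(n + 1, m + 1) == comb(n, m) + comb(n, m + 1)

# Identity (b): the subset counts of a set of size n sum to 2**n
for n in range(10):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```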


Chapter 2: Basic Concepts for use with Switching Theory

Problem 2.13. Given S = {1,2,3,4,5} and T = {1,2,3,4,5,6,7}.
a. What is the size of S X T?
b. Including the null set and S X T itself, how many different binary relations sRt are there?
c. How many complete (different) functions are there f:S→T?
d. How many total functions (including complete and incomplete) are there f:S→T?
e. How many incomplete functions are there f:S→T?
f. How many complete invertible functions are there f:S→T?
Answer:
a. The size of S X T is 5·7 = 35.
b. The binary relations are subsets of S X T. To form an arbitrary subset, you may either select or not select each ordered pair. The number of possible resultant subsets is 2^35. You may also recognize this as being the total number of (unordered) subsets of all sizes.
c. The key to answering this question is to visualize the set of ordered pairs that is the function under consideration and to use the Fundamental Principle of Combinatorial Analysis to arrive at the desired answer. There are 5 elements in S, and so the size of the domain is 5. Since complete functions are requested, each element from the domain will be in the relation. Since the relation is a function, each element will appear only once. All complete functions will contain exactly 5 ordered pairs, the first element of which is from S. The second element is from T, and you may think of selecting balls from an urn. There are exactly 7 elements which may be selected for t in the first ordered pair. For the second ordered pair, we may again select one of 7 (effectively selecting with replacement). The number of complete functions is therefore 7^5.
d. This is a bit trickier. However, the correct point of view makes the problem equally simple with part (b). Thinking in terms of the set of ordered pairs once more, we see that with the first ordered pair, we may select t to be one of 7 elements as before, or, since we are also interested in incomplete functions, we may leave this ordered pair out of the set entirely. This implies that there are really 8 choices.
Indeed, this option exists for all ordered pairs, yielding a total of 8^5 different selections. However, we may not include the (single) option of selecting no ordered pairs, since that would not yield a function. The total number of functions is 8^5 - 1.
e. The number of incomplete functions is the number of total possible functions minus the number of complete functions, (8^5 - 1) - 7^5.
f. We may arrive at the number of invertible functions in basically the same way as the number of complete functions in (c) above, except that once an element from the range has appeared in one ordered pair, it may not appear in another. This problem is therefore equivalent to selection without replacement. Thus: 7·6·5·4·3 = 7!/2!

Problem 2.14. Given A = {1,2,3}, B = {1,2,3,4}, with a ∈ A and b ∈ B.
a. Show the binary relations A x B and a < b.
b. How many complete functions are there of A into B? Show one of them.
c. How many incomplete functions are there of B into A? Show one of them.
d. How many complete and invertible functions are there from A into B?
e. How many complete and invertible functions are there from B into A?
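The counting arguments of Problem 2.13 can be verified by brute force; a Python sketch, with stand-in sets of sizes 2 and 3 chosen only to keep the enumeration small:

```python
from itertools import product

S = [1, 2]               # small stand-ins for the sets of Problem 2.13
T = ['a', 'b', 'c']

# Each domain element maps to an element of T, or to None (left out of the set).
choices = list(T) + [None]
relations = [dict(zip(S, pick)) for pick in product(choices, repeat=len(S))]
funcs = [{k: v for k, v in r.items() if v is not None} for r in relations]
funcs = [f for f in funcs if f]          # drop the empty set: not a function

complete = [f for f in funcs if len(f) == len(S)]
invertible = [f for f in complete if len(set(f.values())) == len(S)]

assert len(funcs) == (len(T) + 1) ** len(S) - 1   # 4**2 - 1 = 15 total functions
assert len(complete) == len(T) ** len(S)          # 3**2 = 9 complete functions
assert len(invertible) == 3 * 2                   # selection without replacement
```

Scaling the same formulas back up to |S| = 5 and |T| = 7 gives the counts in the text: 7^5 complete, 8^5 - 1 total, and 7!/2! invertible.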



2.11. Additional Problems For Chapter 2

Problem 2.15. Given the Universal Set = {0,1,2,3,4,5,A,B,C} and given f = {0,1,5,A} and g = {3,4,5,A,B}:
a. Find h = f ∩ g
b. Find j = f ∪ g
c. Find k = complement of f
d. Find l = f ∩ k
e. Find m = f ∪ k

Problem 2.16. There are three urns, A, B, and C: A contains 8 distinct elements, B contains 6 distinct elements, C contains 5 distinct elements.
a. How large is the set of ordered triplets where the first element of the triplet is from A, the second from B, and the third from C?
b. How many ordered triplets can be constructed from an unordered triplet?
c. How many unordered triplets could be obtained from Urn A's contents?

Problem 2.17. Given that there are five wires being used to monitor on/off functions in a system, how many different conditions can be represented by the signals on these wires?

Problem 2.18. Suppose that we are concerned with the five wires in Problem 2.17 and the various functions that can exist for the signals on the pairs of wires.
a. How many pairs of wires are there?
b. How many complete functions can exist for each pair?
c. How many different total combinations of functions could there be?

Problem 2.19. There are three sets, A, B, and C: The size of A is 5; the size of B is 3; the size of C is 4.
a. An ordered set of size 3 is to be formed, selecting first one element from A, then 1 from B, then 1 from C. How many of these ordered sets will there be?
b. How many ordered sets are possible, containing one element from each set, if we permit the selection process to take place in any order - that is, the first element can be from A or B or C, the second from one of the two remaining, etc.?

Problem 2.20. For breakfast, a person decided to have a cereal, a fruit, bread or toast, and a drink (coffee, tea, milk or cocoa). He notices that there are three cereals to choose from and four fruits. Considering the bread and toast as different selections: a.
From how many different combinations of items does the person have to select?
b. If each item is consumed (assuming the selection has been made) independently, as opposed to eating them all a bit at a time, how many different orders of consumption are there?
c. If eaten together, how many combinations of cereal and fruit are there?

Problem 2.21. The alphabet contains 26 characters. How many combinations are there consisting of:
a. Five (5) letters with no repetition?
b. Five (5) letters with repetition permitted?
c. Unique sets (unordered) containing five (5) letters each?



Problem 2.22. A box has 10 push-button switches on it. It can be set internally for any combination or sequence of combinations for the functions described below. In each case, determine how many combinations there are from which to choose.
a. Pressing 4 switches in the correct sequence.
b. Pressing all 10 switches in the correct sequence.
c. Pressing the correct 4 switches all at once.
d. Pressing the correct switch or switches all at one time (not knowing how many are to be pressed, 1 to 10).
e. Assuming the answer to (d) was 'n' and assuming further that any combination can be repeated, the correct depression of 4 such combinations.

Problem 2.23. Given one variable that can take on 5 different values, a second variable that can take on 4 different values, and a box with one output that can take on 3 different values for any given input condition of the 2 variables mentioned previously, consider the output as a function of the two input variables:
a. What is the size of the domain?
b. How many different complete functions are there?


Chapter 3: Boolean Algebra: A Basis for Switching Theory

3. Boolean Algebra: A Basis for Switching Theory


3.1. Introduction and Orientation Boolean Algebra, originally developed by George Boole 2 as a Mathematical Theory of Logic, forms the basis for our switching theory. As in all areas of mathematics, there have been reorganization and reformulation with respect to postulates and theorem development. However, the principal concern here is that we have a 'clean' set of statements which are clearly valid in their representations of the phenomena we wish to study, and that they are convenient to use and discuss. There are many applications of Boolean Algebra, each of which yields some additional insight into the processes involved in the design of switching circuits. Three applications are presented here, and a fourth is presented in the appendix on basic probability theorems. The three to be used here are: Logic Diagrams (Boolean Rings, Venn Diagrams, Karnaugh Maps) Binary Valued Signals (and gates, or gates, etc.) Analog Switching (relays or electronic switches) 3.2. Logic Diagrams The Venn Diagrams introduced in the previous chapter are referred to as logic diagrams. They have great value in visualizing the ultimate meaning of logical statements formed with the words and, or, not, etc. They are also of considerable value in visualizing the concepts of union and intersection in set theory. Although there is some tendency to confuse the mathematics with the application (and vice versa), the methods of design in switching theory will draw from several areas of application. 3.3. Binary Valued Signals Binary valued signals are those which can take on only two values, generally referred to as 0 and 1. The values of 0 and 1 simply represent the two conditions that can exist. The conditions might be 'dc' voltage levels, they might be frequencies, they might be amplitudes of signals or phases of signals. In this text, most examples will be with 'dc' voltage levels, since they have a particularly simple representation, and are used extensively in digital circuits. 
Consider two binary valued signals (x,y) that can take on the values of 0 (perhaps 0 volts) and 1 (perhaps + 5 volts). Let the signals x and y vary with time as shown in Figure 3.3. There are two very simple circuits which take these signals as input signals, producing output signals that are highly useful in performing operations analogous to logical operations. Figure 3.1 represents a circuit called an or gate. The diodes are critical to the operation, as they form what is called a positive voltage follower.

² George Boole was born in Lincoln, England in 1815. The Mathematical Analysis of Logic was published in 1847. He expanded on his original work and applied it to the calculus of probabilities in The Laws of Thought in 1854.


[Diode or-gate schematic: inputs x and y each connect through a diode to the output z, with a resistor R at the output.]

x y | z
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
Figure 3.1. The Basic Or Circuit: z = x + y


[Diode and-gate schematic: the output z is pulled toward +v through a resistor R, with diodes connecting z to the inputs x and y.]

x y | z
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
Figure 3.2. The Basic And Circuit: z = x·y

The diode connecting x to z will act as a short circuit if the voltage x is higher (more positive) than z, and as an open circuit if x is lower (less positive) than z. The diode connecting y to z operates in the same fashion. The net result is that z will be at the voltage of x or y, whichever is higher. In verbal terms, z will be 1 if either x is high, y is high, or both of them are high. The table in Figure 3.1 shows all possible combinations for x and y, and the value that z will take on under each condition. This table will be referred to as a Table of Combinations. It is identical to the truth table used in logic courses but is referred to in this text as a Table of Combinations to help in keeping the concepts separated. The differences between the use of truth tables in logic and their use in switching theory are a bit subtle, but they are definitely there. Figure 3.2 shows a simple circuit that will perform as an and gate. The diodes are connected in this circuit in such a way that the output z will follow the most negative of x and y. This means that z will be high only if both x and y are high. The Table of Combinations for the and operation is shown in Figure 3.2; x·y should be verbalized as x and y. Figure 3.3 shows the resultant voltage developed at z when the values of x and y are as shown. (The symbol + will be used throughout this text to indicate the or operation. Therefore, the student should develop the habit of reading "x + y" as x or y. The word "plus" will find very little use in this text.)
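Since each diode circuit simply passes the highest (or lowest) input voltage to the output, the two gates can be modeled as max and min; a small Python sketch of this view (the function names are mine, not the text's):

```python
# The or gate's diodes form a positive voltage follower: z sits at the higher
# of the two input voltages.  The and circuit follows the most negative input.
def or_gate(x, y):
    return max(x, y)

def and_gate(x, y):
    return min(x, y)

# Reproduce the Tables of Combinations of Figures 3.1 and 3.2.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, or_gate(x, y), and_gate(x, y))
```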



x y:       0 0 | 0 1 | 1 0 | 1 1 | 0 0
z = x + y:  0  |  1  |  1  |  1  |  0
z = x·y:    0  |  0  |  0  |  1  |  0
Figure 3.3. Electrical Signals and Their Or and And Equivalence


3.4. Relays and Analog Signal Switches The first major application of switching theory came in the telephone central office where large racks of relays operated as primitive computers (but in many ways, not so primitive). The basic relay in many simple circuits is represented by the schematic as shown in Figure 3.4.
[Figure: attached-contact schematics of the x relay, unenergized and energized, showing the normally closed (nc) and normally open (no) contacts, together with the corresponding "detached" contact symbols.]
Figure 3.4. Relay Circuits with "Attached Contacts"

The schematics on the left use a realistic symbol where the coil represents the electromagnet. When the electromagnet is energized, an iron "armature" is drawn downward, causing the switch settings to change. The back contacts, called normally closed contacts (nc), are closed when the relay is down (or unenergized), forming continuity for the associated circuits. When the x relay pulls in (comes up), the circuit created by the back contacts opens and the circuit closes to the forward contacts, called normally open (no) contacts. In circuits with a large number of relays, these schematics became too difficult to read, and the need for simplification led to the "detached contact" schematics which are still used today, not only for relay circuits but also for electronic analog signal switches. The normally open contacts are represented with an X and labeled with the name of the relay. The normally closed contacts are represented with their own break symbol and are labeled as the complement of the name of the relay. Figure 3.5 shows both the "attached" and the "detached" schematics for or and and circuits with respect to continuity.
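Continuity in contact networks follows the same pattern: a series connection conducts only when every contact is closed, a parallel connection when any contact is closed. A minimal sketch (the helper names are mine):

```python
# Continuity of relay contact networks: series connection = and, parallel = or.
def series(*contacts):
    return all(contacts)      # continuity only if every contact is closed

def parallel(*contacts):
    return any(contacts)      # continuity if any path is closed

x, y = True, False            # x relay energized, y relay unenergized
assert parallel(x, y) == (x or y)    # the or circuit of Figure 3.5
assert series(x, y) == (x and y)     # the and circuit of Figure 3.5
assert series(x, not x) is False     # x and its complement never both conduct
assert parallel(x, not x) is True    # one of the two always conducts
```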


[Figure: relay contact networks in attached and detached form; x and y contacts in parallel give continuity for x + y, and in series give continuity for x·y.]
Figure 3.5. Or and And Relay Schematics


Problem 3.1. Assume + is a symbol representing or and · is a symbol representing and. Draw: a.) Venn Diagrams, b.) Diode Circuits, c.) "Attached" contact relay circuits, and d.) "Detached" contact circuits representing the following functions:
a. (x + y ) z
b. (x y ) + (y z )

Problem 3.2. Draw a detached contact circuit for:
a. (x y ) + (x y )
b. (x + y ) (x + y )

There have been several books written on switching theory as it applies to relays. A very important aspect of the design of large relay circuits is the minimization of the number of relays required and the minimization of the number of contacts. This text will not look at this topic further, but we shall return to relay circuits, or the equivalent analog signal switching circuits, from time to time to help in understanding some of the concepts.

3.5. Concepts of Complementation

In the Venn diagram (Figure 3.6) the Universe will represent 1 and the null space will represent 0. Since the complement of x is x', and everything is either x or x', it is reasoned that x ∪ x' = 1. Since no element can be in both x and x', the intersection x ∩ x' = 0. We will also use this diagram with or and and, stating that x + x' = 1 and that x·x' = 0.

[Figure: Venn diagram, signal, and contact representations of x, x', 1 and 0.]

Figure 3.6. Three Representations of x, x', 0 and 1



With digital signals, the complement of a signal is obtained through its inversion. When x is high, x' will be low, and vice versa. Since one of the two is always high, the output of an or gate with inputs x and x' will always be 1 (x + x' = 1). Since there is never a time when both x and x' are high at the same time, the output of an and gate with inputs x and x' will always be zero (x·x' = 0). With relays, the normally closed contact is labeled x' and the normally open contact is labeled x. Except for a very small time when the relay is "coming up" or "falling out," either x has continuity or x' has continuity. From a practical point of view, there are actually times when it is desirable to have "make before break" and "break before make" type adjustments, but these are made to simplify the circuits or to improve reliability. The design is performed under the assumption that a series connection of x and x' contacts will never have continuity (x·x' = 0) and that a parallel connection will always have continuity (x + x' = 1). We will see shortly that x + x' = 1 and x·x' = 0 are two of the postulates of Boolean Algebra. For the switching specialist, 0 and 1 are ground and power voltage respectively. For the logician, 1 is the Universe and 0 is the Null space. It is now time to examine Boolean Algebra and see if the resultant mathematical system can be applied to these systems.

3.6. Boolean Algebra

A Boolean Algebra can be established with different sets of postulates. The ones chosen here are a bit cleaner conceptually in application to switching theory. Let B = {S, R, P} be a Boolean Algebra. S will consist of the two constants {0,1} and any number of variables that can take on the values (and only the values) of the constants.
There are four symbols {=, ', +, ·}, where = represents the equivalence relation and carries the substitution property, ' (read not) represents the inversion or complement operator, + represents the or operation, and · represents the and operation. The two operators {+, ·} are binary operators with the following properties:

Postulate P1: They are commutative.
a. a + b = b + a
b. a·b = b·a

Postulate P2: They distribute over each other.
a. a + (b·c) = (a + b)·(a + c)
b. a·(b + c) = (a·b) + (a·c)

Postulate P3: The two constants act as identity elements with respect to the operators.
a. a + 0 = a
b. a·1 = a

Postulate P4: For every element a there exists an element a'.
a. a + a' = 1
b. a·a' = 0

Before proceeding further, we need to know the relevance of the postulates to the circuit elements already discussed. Postulate P4 has already been discussed and does hold. Postulate P3 is also quite evidently true from the basic definitions. Postulate P1 is rather trivial, since in the case of the Venn diagrams, union and intersection do not change if the names of the circles are changed. Obviously with the diode circuits, the inputs are identically connected; again, changing the inputs around could not affect the operation. Also with relay contacts, changing the contacts around in simple series and parallel circuits cannot affect the operation of the circuits.
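Because S contains only the two constants, each postulate can be checked exhaustively over {0,1}; a short Python sketch of this perfect-induction check (the function names are mine):

```python
from itertools import product

def OR(a, b):  return a | b
def AND(a, b): return a & b
def NOT(a):    return 1 - a

# Check every postulate for every combination of values.
for a, b, c in product((0, 1), repeat=3):
    assert OR(a, b) == OR(b, a) and AND(a, b) == AND(b, a)   # P1: commutative
    assert OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c))       # P2a
    assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))      # P2b
    assert OR(a, 0) == a and AND(a, 1) == a                  # P3: identities
    assert OR(a, NOT(a)) == 1 and AND(a, NOT(a)) == 0        # P4: complement
```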


This leaves only Postulate P2. However, this postulate is very important, as well as very interesting. The student will recognize Postulate P2(b) as being the same as in standard arithmetic, which states that multiplication is distributive over addition. It is this property that allows us to extract factors and to "multiply them out." It is comforting to know that this operation is valid and that something so familiar can be used without worry. However, Postulate P2(a) looks very strange indeed. It does not work in standard arithmetic - this is why it looks strange. However, it is just as powerful as P2(b) and deserves our attention. First, let us observe P2(a) with respect to Venn Diagrams.

[Venn diagrams: the shaded region for a + (b·c) and the shaded region for (a + b)·(a + c) are identical.]

Figure 3.7a: a + (b·c). Figure 3.7b: (a + b)·(a + c).
Figure 3.7. Distributive Law P2(a) with Venn Diagrams

By observing the union of a with the intersection of b and c, and comparing the result with the intersection of the union (a + b) and the union (a + c), we see that the resultant areas are identical. We now turn to relays.
[Detached contact circuits: a contact a in parallel with the series pair b, c; and the series connection of (a in parallel with b) and (a in parallel with c).]

Figure 3.8. Distributive Law P2a with Relay Circuits The problem here is to see if these circuits are functionally equivalent. The best way to do this is to construct the Table of Combinations for both circuits and compare the results on a one-to-one basis. This method is fundamentally identical to the Method of Perfect Induction used in logic where all possible conditions are examined in a Truth Table. The simplest way to set up a Table of Combinations is to set up the variables and use a binary sequence to guarantee that all possible conditions are tested. If the two circuits respond the same for all possible input conditions, then they are equal (thus satisfying the equivalence relation).



a b c | b·c | a+(b·c) | a+b | a+c | (a+b)(a+c)
0 0 0 |  0  |    0    |  0  |  0  |     0
0 0 1 |  0  |    0    |  0  |  1  |     0
0 1 0 |  0  |    0    |  1  |  0  |     0
0 1 1 |  1  |    1    |  1  |  1  |     1
1 0 0 |  0  |    1    |  1  |  1  |     1
1 0 1 |  0  |    1    |  1  |  1  |     1
1 1 0 |  0  |    1    |  1  |  1  |     1
1 1 1 |  1  |    1    |  1  |  1  |     1

Figure 3.9. Proof of P2(a) by Table of Combinations
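A Table of Combinations like Figure 3.9 can be generated mechanically by counting through the binary sequence; a Python sketch:

```python
from itertools import product

print("abc  bc  a+(bc)  a+b  a+c  (a+b)(a+c)")
for a, b, c in product((0, 1), repeat=3):   # the binary sequence 000..111
    bc = b & c
    lhs = a | bc
    rhs = (a | b) & (a | c)
    print(f"{a}{b}{c}   {bc}     {lhs}      {a | b}    {a | c}      {rhs}")
    assert lhs == rhs                       # P2(a) holds in every row
```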

Finally, we turn to digital signal analysis. Figure 3.10 shows the construction of three signals that effectively forms a Table of Combinations.
[Figure: timing waveforms for a, b, c, b·c, a+(b·c), a+b, a+c, and (a+b)·(a+c).]
Figure 3.10. Distributive Law P2(a) with Digital Signals

Again, we see that a + (b·c) is identical to (a + b)·(a + c). Since we now know that the basic postulates represent our "ideal" system (no time delays, etc.), we may proceed to develop our mathematical ability with these postulates and have confidence in the results we obtain. In the real world, we have to worry about the situations in which the mathematics is not a perfect representation of our system. We know that it is not possible for a voltage to go from 0 to 1 without passing through intermediate values. This means that our very basic assumption that there are only two constants is invalid. Also with digital signals, we obtain x' from x by passing the signal through an electronic inverter. There is some delay in forming x', and the result, somewhat magnified, is shown in Figure 3.11.



From this, it is seen that x + x' is not always 1 and that x·x' is not always 0: another very important postulate does not always hold. The failure of a real world system to meet the postulates of the mathematical system being used to describe it creates problems called hazards. For digital systems, these hazards have been studied in great detail, and there are some simple techniques that can be applied to avoid problems. The specific problems that require attention will be discussed in a later chapter. In the meantime, the student may comfortably assume that the mathematical system applies.
[Figure: waveforms of x and its delayed inverse x'; during each transition, x + x' momentarily drops to 0 or x·x' momentarily rises to 1.]

Figure 3.11. Consequences of Delays in Signal Inversion

Since we know that Boolean Algebra is a valid model for the systems of interest, it is possible to use it as our mathematical system for logical circuit design. It is best that we become proficient in its use. In mathematics, the development of a system occurs through the development of theorems. Although most of the theorems can be proved through the method of perfect induction, the postulates may be used more efficiently to develop theorems, and those theorems can be used to develop more theorems, etc.

3.7. Some Boolean Algebra Theorems

The method of perfect induction can always be used to determine whether or not two expressions represent the same function. However, this is a very cumbersome process. One of the advantages of an algebra is the ability to manipulate expressions symbolically to gain the desired understanding. In a course where the mathematics is emphasized, theorems are developed in a very formal fashion using the postulates and previously developed theorems. Although this text is intended to be more along the lines of an engineering text, it is to our advantage to work with the postulates and theorems in order to understand them better and to appreciate their importance to our work.

Theorem 3.1: Idempotent Laws
Theorem 3.1a: a + a = a
a = a + 0                P3a
  = a + (a·a')           P4b
  = (a + a)·(a + a')     P2a
  = (a + a)·1            P4a
  = a + a                P3b

Theorem 3.1b: a·a = a
a = a·1                  P3b
  = a·(a + a')           P4a
  = (a·a) + (a·a')       P2b
  = (a·a) + 0            P4b
  = a·a                  P3a



Theorem 3.2a: a + 1 = 1
a + 1 = 1·(a + 1)        P3b
  = (a + a')·(a + 1)     P4a
  = a + (a'·1)           P2a
  = a + a'               P3b
  = 1                    P4a

Theorem 3.2b: a·0 = 0
a·0 = 0 + (a·0)          P3a
  = (a·a') + (a·0)       P4b
  = a·(a' + 0)           P2b
  = a·a'                 P3a
  = 0                    P4b

Theorem 3.3: Absorption Laws
Theorem 3.3a: a + (a·b) = a
a + (a·b) = (a·1) + (a·b)    P3b
  = a·(1 + b)                P2b
  = a·(b + 1)                P1a
  = a·1                      T3.2a
  = a                        P3b

Theorem 3.3b: a·(a + b) = a
a·(a + b) = (a + 0)·(a + b)  P3a
  = a + (0·b)                P2a
  = a + (b·0)                P1b
  = a + 0                    T3.2b
  = a                        P3a

The absorption laws are particularly useful in the process of obtaining simpler expressions. We can examine them with respect to Venn Diagrams.

[Venn diagrams: (a·b) shaded in Figure 3.12a, (a + b) shaded in Figure 3.12b.]

Figure 3.12. Venn Diagrams for (a·b) and (a + b)

In Figure 3.12a, we see that (a·b), which is the intersection of a and b, must be a subset of a. The union of any set with a subset of itself cannot contain any more or less than the set itself. Similarly, in Figure 3.12b, the union of a with b will contain all of a. Then the intersection of a with that union will contain all of a and nothing more. Diagram 3.12a provides a view of another concept that should be discussed at this time. This concept is coverage. We see that a contains all intersections of itself with other variables. We can therefore speak about "a covering its subsets." In a + (a·b), we say that since a covers (a·b), the term (a·b) adds nothing more to the expression and may be removed. Whereas the concept of covering is quite appropriate to Figure 3.12a, the concept of coverage is carried over as an abstraction to Theorem 3.3b. In this expression, an intersection is required. The intersection must be equal to or smaller than the smallest area, and so it is said that with a·(a + b), the term (a + b) is covered by a and is not needed. The concept of coverage is important when it comes to building a function or a system. We will speak of covering subsets in much the same way we speak of meeting specifications.
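The absorption laws, and the dropping of a subsuming term, can themselves be checked by perfect induction; a short Python sketch:

```python
# Perfect-induction check of the absorption laws (Theorem 3.3).
for a in (0, 1):
    for b in (0, 1):
        assert a | (a & b) == a      # T3.3a: a + (a*b) = a
        assert a & (a | b) == a      # T3.3b: a*(a + b) = a

# (a*b) subsumes a, so a covers it and the term may be dropped:
# a + (a*b) + b reduces to a + b for every combination of values.
for a in (0, 1):
    for b in (0, 1):
        assert a | (a & b) | b == a | b
```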



A term that comes from logic and probability theory and is directly related to these concepts is the term "implies." The concept of implication is that when an event within a subset has occurred, then an event in the set has occurred. Again, in Figure 3.12a, if an event in (a·b) has occurred, then surely an event in a has occurred. The phraseology used is that (a·b) implies a. In switching theory, we also say that the term (a·b) is an implicant of a. With respect to a·(a + b) = a, the (a + b) term is said to be an implicate of a. When an intersection of sets is taken, the resultant will always be smaller than (or possibly equal to) the smallest. Therefore, the implicate is larger than (or possibly equal to) the resultant intersection. The resultant could imply the implicate, but the implicate cannot in general imply the intersection. Another useful word is subsume. If a term t1 has all the literals of term t2 (and maybe even more), then it is said that term t1 subsumes term t2. For example, (a·b) subsumes a. Also, (a + b) subsumes a. Theorem 3.3 assures us that we may drop subsuming terms from expressions without changing their meaning. There are many more theorems that have been developed, but not all of them are of great use in switching theory. Some important theorems follow:

Theorem 3.4: The binary operators of Boolean Algebra are associative.
Theorem 3.4a: a + (b + c) = (a + b) + c
Theorem 3.4b: a·(b·c) = (a·b)·c

This property is very obvious from observing Venn Diagrams, relay circuits, and gate circuits, and could be stated as a basic postulate with regard to the system. It is stated as a theorem, however, since it is derivable from the postulates (although through a rather awkward procedure).

Theorem 3.5: Dual systems may be formed by:
a. replacing each binary operator with the other, and by
b. replacing each identity element with the other.

The proof of this is by induction.
The principal reason for the format used in stating the postulates was to bring out this property. Whenever a postulate (a) was stated, a postulate (b) was stated as the dual of postulate (a). An examination of those proofs shows that for any proof of one expression, there will follow a proof of the dual expression in exactly the same sequence using the dual postulates. The duality of Boolean Algebra is a powerful property for helping the student to understand the mechanisms in the next chapter. It will be referred to frequently.

Theorem 3.6: DeMorgan's Law
Theorem 3.6a: (a + b)' = a'·b'
Theorem 3.6b: (a·b)' = a' + b'

Problem 3.3. Prove DeMorgan's Law using the postulates of Boolean Algebra and the previously developed theorems. Hint: Both of the postulates defining the inverse element must hold.
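Before attempting the algebraic proof asked for in Problem 3.3, the law can be confirmed by perfect induction; a short Python check:

```python
# Perfect-induction check of DeMorgan's Law (Theorem 3.6), with 1 - v as complement.
for a in (0, 1):
    for b in (0, 1):
        assert 1 - (a | b) == (1 - a) & (1 - b)   # (a + b)' = a'*b'
        assert 1 - (a & b) == (1 - a) | (1 - b)   # (a*b)'  = a' + b'
```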



Problem 3.4. Show by Venn Diagrams, digital signals, and a table of combinations that DeMorgan's Law is valid for these systems.

DeMorgan's Law plays a very important role in switching theory and will be used heavily in the next chapter.

Theorem 3.7.
Theorem 3.7a: a + (a'·b) = a + b
Theorem 3.7b: a·(a' + b) = a·b

Problem 3.5. Prove Theorems 3.7a and 3.7b, stating which postulates and theorems you have used.

Theorem 3.8. Reduction/Expansion Theorem Number 1.
Theorem 3.8a: (a·b) + (a'·b) = b
Theorem 3.8b: (a + b)·(a' + b) = b

Problem 3.6. Prove Theorems 3.8a and 3.8b, stating which postulates and theorems you have used.

Theorem 3.8 is the basis for the Quine-McCluskey minimization procedure to be discussed in the chapter on minimization. When used in reverse, it is also the basis for expanding terms to contain more variables, which must be done before minimization can take place.

3.8. Additional Problems for Chapter 3

Problem 3.7. Given the signals below with positive going logic, draw the signals that would result from an implementation of the expressions.

c=a+ b d= a b Problem 3.8. Given the three signals below as positive going signals, show the signals that would be present for the functions requested.
a

d=ab e=bc f = a + (b c ) g=d+e h=d e



Problem 3.9. Assume you have relays that will follow the above signals. Draw the circuits that will produce signals d through h.

Problem 3.10. Draw circuits using and-gates, or-gates and inverters to realize all signals d through h in Problem 3.8.

Problem 3.11. Answer the following as true or false.
a. A negative voltage follower with two inputs will produce an and gate with positive going logic.
b. A positive voltage follower with two inputs will produce an or gate with negative going logic.

Problem 3.12. Given the circuit below with the signals shown to the right of it, sketch the signals (a,b,c) that will result.
[Figure: gate circuit with inputs x and y and internal signals a, b, c, with the x and y waveforms shown to the right.]

Problem 3.13. Given f(x,y,z) = x+(y x ) + ( y z ), a. Draw a Venn Diagram that would represent f(x,y,z). b. Draw a relay circuit (detached contacts) that would provide continuity for the above function. c. Draw a circuit using and gates and or gates that would develop the function from signals. Problem 3.14. Write an equivalent algebraic expression (function) for the associated pictures below.
[Figures for Problem 3.14 omitted.]
Problem 3.15. Draw circuits to yield f = (x y) + z + (x z ). a. Use and gates and or gates. b. Use relays. c. Draw the Venn Diagram for the function.



Problem 3.16. Given f = ( x + y + z ) (w + x + (y z )) (x + ( y z)) a. Draw a gate circuit to realize this function. b. Draw a detached contact relay network to realize this function. Problem 3.17. Using the postulates and theorems introduced in this chapter, prove that the left-hand side equals the right-hand side for the equations below. a. x + ( x y) = x + y b. x + (y z) = x + y + z

c. (x y) + (( x y ) z) = (x y) + z d. (x + y) (x + y + z ) = (x + y) (x + y + z) e. (a + b + c) (a + b + c ) = (a + b) f. (a + b) (( a + b ) + c + d) = (a + b) (c + d) g. (x y) + ( y z) = (x + ( y z)) (y + z) Problem 3.18. Using the postulates and theorems introduced in this chapter, prove that the left-hand side equals the right-hand side for the equations below. Show which postulates you have used. a. y +(x y ) = x + y b. (x y ) + (x z) + (x y z ) = x c. (x y) + (x y ) + (x z) + ( x z) = x + z d. ( x + z) ( x + y) ( x + z + y ) = x Hint: Use Theorem 3.8 "in reverse" e. (x z) + (y z) + ( x y) = (x z) + ( x y) f. ( x y z ) + ( x y z) = x + y g. ( x + y) (y + z ) (x + z ) = ( z + y) (x + z ) Hint: Use Theorem 3.8. "in reverse" Problem 3.19. Use DeMorgan's Law to find an expression for f in which there are no complemented expressions or terms other than literals. a. f (x,y,z) = (x y) + ( x y z) b. f (x,y,z) = (x + y ) (y + z ) c. f (x,y,z) = ((x y ) + z) (y +(x z + y )} d. f (x,y,z) = ((x + y) ( y + z)) + (x ( y z )) e. f (x,y,z) = (x y ( y + z )) + x + y f. f (w,x,y,z) = (w ((x y ) + z)) + (x ( w + ( y z))) g. f (w,x,y,z) = w + ( x ( w + z ) y) + ( y z) h. f (w,x,y,z) = ((w x) + y) (( x + z ) ( w y ) (( x y ) + z ))


Chapter 4: Switching Theory for Combinational Circuits

4. Switching Theory for Combinational Circuits


4.1. Introduction
In the last chapter, Boolean Algebra was introduced as the mathematical basis for switching theory. In this chapter, techniques for efficient use of that algebra are introduced, along with a substantial number of illustrations and problems. The need to program computers for a variety of uses has led to considerable attention being focused on the structure of languages. As a result, many ideas and concepts that have been accepted in the past as part of accumulated mathematical maturity are being formalized. A considerable amount of notation has already been used without formal definitions - hopefully without too much confusion. However, for efficient use of paper, time and brain power it is now better to be precise and reasonably formal.
4.2. Definitions of Switching Functions and Expressions
1. A switching system is a system that can be modeled by a Boolean Algebra = {S,R,P}.
2. Constants: The constants for Switching Theory are 0 and 1 (the identity elements of Boolean Algebra).
3. Variables: Elements (of S) that take on either of the values 0 or 1 depending on the circumstances. Input or independent variables: those variables that are the input to the switching system. Dependent variables: those variables formed by operations on the input variables. Output variables: those variables presented to other systems from this system.
4. The Universe of a switching system will consist of S containing: a. The constants {0,1}. b. The input variables (or independent variables). c. All possible functions of the input variables. These functions may also be given variable names and used as variables, but they are "dependent variables."
5. Input State: At any point in time, each of the input variables will be either 0 or 1. For efficiency, the input variables will be considered as an ordered n-tuple; for example (u,v,w,x).
The condition (value) of the input variables at any time will also be an ordered n-tuple, with 0's or 1's representing the respective values of the input variables. For example, (0,1,1,0) (or 0110, since commas are unnecessary) represents the case for u = 0, v = 1, w = 1, x = 0. The condition on the input variables, represented by this ordered n-tuple of 0's and 1's, will be referred to as the input state.
6. Output State: The value of the output variables at any point in time (an ordered set if there is more than one) is referred to as the output state.
7. Switching Function: A function in the switching system will consist of a nonempty set of ordered pairs, where the first element represents an input state (a particular condition on the input) and the second represents the output state that occurs with that particular input state. A function may have only one output variable, but a system may have many functions. The most convenient representation of a switching function will be f(input variables) = Boolean expression. For example, f(x,y) = x + y. In algebraic form, the expression will serve as the rule or formula that enables us to determine for which elements of the domain (input states) the function takes on the value of 1.
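Definition 7's view of a function as a set of (input state, output state) pairs can be tabulated directly. A minimal sketch in Python, using the text's own example f(x,y) = x + y (the dictionary layout and names are ours, not the text's):

```python
from itertools import product

# Tabulate f over every input state: the resulting dict is exactly the
# set of ordered pairs (input state, output state) of definition 7.
def f(x, y):
    return x | y   # Boolean "or" on 0/1 values

table = {state: f(*state) for state in product((0, 1), repeat=2)}
for state, out in sorted(table.items()):
    print(state, "->", out)
```

Running this prints one line per input state, from (0, 0) -> 0 through (1, 1) -> 1.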

8. Domain: The domain of a switching function will be all possible input states.
9. Range: The range of a switching function is the constant set {0,1}.
10. A complete function is one in which the output state is specified for every possible input state.
11. An incomplete function is either a function where one or more input conditions cannot occur, or (during the design phase only) where an output may be either 0 or 1 (unspecified) for one or more input conditions.
12. Operator: The symbols used to imply the process by which another element of the set may be formed from one or two other elements. Although only the binary operators + and · and the unary operator ¯ are defined in the postulates of Boolean Algebra, other operators will be defined. Other operators can always be defined in terms of +, · and ¯. These symbols will be read as or, and and not, respectively.
13. Operands: The elements that enter into an operation are called operands.
14. Product: The use of the · operator shall be referred to as "taking the product" of the operands. The resultant element will be referred to as the "product" of the operands. Since a·1 = a, every element can be said to be a product of itself with 1. Therefore, we may talk of "the product of one or more elements."
15. Conjunction: A synonym for product. The conjunction of two variables x and y is x·y.
16. Sum: The use of the + operator will be referred to as "taking the sum" of the operands. The resultant element will be referred to as the "sum" of the operands. Since a+0 = a, every element may be considered as a sum of itself with zero. Then we may talk of the "sum of one or more elements."
17. Disjunction: A synonym for sum. The disjunction of two variables x and y is x+y.
18. Expression: A meaningful assembly of variables and operators to form an algebraic representation of a function. Note that we could have defined an expression as any collection of operators and variables and then had valid expressions and invalid expressions.
However, invalid expressions are of no interest to us and, if it is not meaningful, it will not be considered an expression. It is desirable to have a set of rules for the development of an expression. These rules will be called the Rules of Assembly.
4.3. Rules of Assembly
Let operator be the name of either Boolean binary operator. Let the symbol | represent the concatenation of names and operators. (The process of appending characters or symbols to the right side of an existing {ordered} collection of symbols and characters.)
19. Literal: A literal is the name of a variable; or, a literal is the name of the complement of a variable. In this text, all literals will be single letters of the alphabet or single letters with a bar over them. For example, x is a literal and x̄ is a literal. Either of these literals is said to be a representative literal of the variable x. Note that constants are not literals.
20. Term: A term is a literal; or a term is literal1 | operator | literal2; or a term is term1 | operator | term2, providing term1 and term2 contain only the same binary operator. Examples include: x, x y , x y z . Note: The following are not terms: (ab) + c, x + 0 , 0, 1, 0 x, x + 0, x + 1. Again, note that constants are not literals.
21. Product term: A product term is a literal or a conjunction of literals. A product term is a term in which the binary operator, if any, is the and operator (·).

Examples include x, x y , x y z .
22. Sum term: A sum term is a literal or a disjunction of literals; that is, a sum term is a term in which the binary operator, if any, is the or operator (+). We may also say that a sum term is the sum of one or more literals. Examples include x, x + y , x + y , x + y + z . Note: Product terms and sum terms do not represent mutually exclusive categories, since a single literal qualifies as both a product term and a sum term. With two or more literals present, a term cannot be both a sum term and a product term. However, since a single literal can be considered as both a product term and a sum term, a product term which consists of the product of literals can be considered as a product of sum terms, and a sum term, which is a sum of literals, can be considered as a sum of product terms.
23. Normal term: A normal term is one in which no variable is represented more than once. For example: a+b is a normal (sum) term, but a+b+a is not, nor is a+b+ a .
24. Canonical term: A normal term in which all of the input variables are represented by a literal. For example, in a system (x,y,z), x y z and x + y + z would be canonical terms.
25. Expression: An expression is a constant; or an expression is a term; or an expression is expression1 | operator | expression2; or an expression is the complement of an expression.
Problem 4.1. Determine whether each of the following is a literal, a term (if so, normal or not, canonical or not, and product or sum) or an expression. Consider them to be associated with a system with input variables (x,y,z).
a. x  b. x y  c. x + y  d. a + x + y  e. x ( y + z )  f. x + (+ x y)  g. x y z  h. x + y + z  i. x + ( y + z )  j. 1 x y
4.4. Establishment of a Hierarchy for Operators
Parentheses will be used to establish the order in which the operations are to be performed. If the order in which operations are performed can result in different answers, then the correct order must be established by using parentheses.
An expression inside parentheses must always be evaluated before it is used with operators external to the parentheses. (This is the standard procedure learned by students in early algebra courses.) In order to reduce the number of parentheses required, an additional rule is followed: and operators take precedence over or operators. Thus, (abc)+d may be written abc+d. This rule is also consistent with the hierarchy of algebra. The unary operator (not or complement) takes precedence over the binary operators. Note that when the unary operator operates on an expression containing binary operators, that expression is evaluated first and the complement is then taken. For example, a b + c implies the complement of ((ab)+c).
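As it happens, Python's bitwise operators follow the same hierarchy, which makes the parsing rule easy to check by machine; a small illustration (the values are chosen by us so the two readings differ):

```python
# & (and) binds tighter than | (or), just as "and" takes precedence over
# "or" above, so a & b & c | d parses as ((a & b) & c) | d, mirroring abc + d.
a, b, c, d = 0, 1, 1, 1
print(a & b & c | d)    # (a·b·c) + d
print(a & (b & c | d))  # a·(b·c + d): parentheses force a different order
```

With these values the first expression evaluates to 1 and the second to 0, so the precedence rule genuinely matters.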

Although not a hierarchy rule, requiring that variables be represented by a single character permits us to simplify our expressions further by allowing us to drop the · symbol. For example, a·b·c can be represented as abc unambiguously. The symbol for the and operation will only be used when an emphasis of the operator is desired.
4.5. Forms for Expressing Functions
A function can be defined by a table of combinations or by an algebraic expression. The domain will normally be obvious from the expression; that is, the input variables are those variables found in the expression. If the domain has variables not found in the expression, then use a notation that will bring this into evidence; for example f(x,y,z) = x. In this text, the latter notation will generally be used. But if it is omitted, the student may assume a domain including only the variables represented by literals. There are two basic ways of "building" functions. One way is to sum product terms that represent a union of areas on the Venn Diagram. The other is to take the product of sum terms, which represents taking the intersection of areas on the Venn Diagram. For example:
f(x,y,z) = x + yz
f(x,y,z) = (x+y)(x+z)
These two algebraic expressions represent the same function. The second form can be obtained from the first using Postulate P2a - the Distributive Law for + over ·. The first form can be obtained from the second by using Postulate P2b - the Distributive Law for · over +. Other theorems and postulates may be used to simplify the expression. However, it is principally the distributive laws that allow us to convert directly from one form to the other.
Problem 4.2. Convert the following expressions to sums of product terms, using Postulate P2b.
a. (x + z )( y + z )
b. ( x + y + z)(x + y + z )
Problem 4.3. Convert the following expressions to products of sum terms, using Postulate P2a.
a. x y + x z
b.
x yz + x y z
These two basic forms for functions are given the following names: the sum of products forms will be called disjunctive forms; the product of sums forms will be called conjunctive forms. Expressions that are not in one of these two forms are said to be in mixed form.
26. Disjunctive Normal Form: An expression which is a normal product term or a sum of normal product terms is said to be in disjunctive normal form, abbreviated dnf, and is called a disjunctive normal formula. The following examples are expressed in dnf:
a. f(x,y,z) = x
b. f(x,y,z) = x + y + z
c. f(x,y,z) = x + y z
d. f(x,y,z) = xyz
27. Disjunctive Canonical Form: If every product term in a disjunctive normal formula is a canonical term, then the expression is said to be in disjunctive canonical form, abbreviated dcf, and called a disjunctive canonical formula. The following examples are expressed in dcf:
a. f(x,y) = xy
b. f(x,y) = x y + xy
c. f(x,y,z) = xyz
d. f(x,y,z) = x y z + xyz
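The claim that the disjunctive and conjunctive forms above name the same function can be verified exhaustively over the domain; a minimal sketch (the helper and function names are ours):

```python
from itertools import product

# Two expressions denote the same switching function iff they agree on
# every input state of the domain.
def equivalent(f, g, nvars):
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=nvars))

sop = lambda x, y, z: x | (y & z)          # x + y z   (disjunctive form)
pos = lambda x, y, z: (x | y) & (x | z)    # (x+y)(x+z) (conjunctive form)
print(equivalent(sop, pos, 3))  # True
```

The same helper settles any of the conversion problems in this section by brute force, which is a useful check on an algebraic derivation.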

28. Implicant: An implicant of a function is a normal product term that implies the function. Note that a function covers all of its implicants. Given f(x,y,z) = x + y z, the following are all implicants of f (but not all of the implicants of f):
a. x
b. x y
c. x y z
d. y z
29. Theorem 4.1a: The sum of all of the implicants of a function covers the function. Note that all of the implicants are seldom needed to cover a function. For example, f(x,y,z) = x + y z is covered by the implicants x and y z. However, there are many normal product terms which imply the function, and if they were all used, all of the areas of the Venn Diagram would be covered at least twice.
30. Irredundant Cover: An irredundant dnf cover for a function is a sum of implicants that cover the function, the removal of any one of which will destroy the cover. Note that this does not mean that all of the areas of the Venn Diagram will have only one cover. Examples for f(x,y,z) = x + y z include the following:
a. f(x,y,z) = x + y z
b. f(x,y,z) = x y + xy + y z
c. f(x,y,z) = x z + xy + y z
d. f(x,y,z) = x + x y z
Note that x + xy + y z would be redundant, as the term xy could be removed without changing the coverage. When we discuss hazards, we will observe that redundant coverage is sometimes desirable, even necessary. However, in most digital circuits (that is, those where pulses are used to guarantee that signals are stable before they are tested), redundant coverage results in unnecessary expense and a loss in reliability. Although minimal cost circuits (except for the hazard-free circuits) will always be selected from the set of irredundant covers, irredundant coverage by itself does not guarantee a minimum cost network, as is obvious from the examples above. Techniques exist for assuring that a circuit will have minimum cost and they will be covered in Chapter 5.
Problem 4.4. For the following functions, use Venn Diagrams to determine if the cover is irredundant.
a. f(x,y,z) = x z + x z + x y
b. f(x,y,z) = xy + xz + y z
31. Conjunctive Normal Form: An expression that is a normal sum term or a product of normal sum terms is said to be in conjunctive normal form, abbreviated cnf, and is called a conjunctive normal formula. The following are examples of functions expressed in cnf:
a. f(x,y,z) = x
b. f(x,y,z) = x + y + z
c. f(x,y,z) = xy
d. f(x,y,z) = x( y + z)
e. f(x,y,z) = xyz
f. f(x,y,z) = (x + y)( y + z )
32. Conjunctive Canonical Form: If every sum term in a conjunctive normal form is a canonical term, then the expression is said to be in conjunctive canonical form, abbreviated ccf, and is called a conjunctive canonical formula. The following functions are expressed in ccf:
a. f(x,y) = x + y
b. f(x,y) = (x + y ) (x + y)
c. f(x,y,z) = x + y + z
d. f(x,y,z) = (x + y + z ) (x + y + z)
33. Implicate: An implicate of a function is a normal sum term that is implied by the function. With conjunctive forms, the concept of coverage is concerned with the

areas omitted, called the zeros of the function. Since each intersection may create more zeros, we will refer to covering the zeros. If an implicate omits the same or more areas than another, then it is said to cover that implicate. Conceptually, a function covers all of its implicates, and if two functions cover each other, they are equal. Given f(x,y,z) = x ( y + z), the following are all implicates of f (but not all of the implicates of f):
a. x
b. x + y
c. x + y + z
d. y + z
34. Theorem 4.1b: The intersection of all the implicates of a function covers the zeros of that function. Again, it is possible to have multiple coverage. For example, f(x,y,z) = x( y + z) is covered by the implicates x and ( y + z). However, there are many normal sum terms implied by the function, and if they were all used, all of the zero areas of the Venn Diagram would be covered at least twice.
35. Irredundant cnf cover: An irredundant cnf cover for a function is a product of implicates that covers the function's zeros, the removal of any one of which will destroy the cover. Note that this does not mean that all of the zeros will be covered only once. Irredundant covers of f(x,y,z) = x( y + z) include the following:
a. f(x,y,z) = x ( y + z)
b. f(x,y,z) = (x + y )(x + y)( y + z)
c. f(x,y,z) = (x + z) (x + y) ( y + z)
d. f(x,y,z) = x( x + y + z)
Note that x(x + y) ( y + z) would be redundant, as the term (x + y) could be removed without changing the coverage of zeros.
Problem 4.5. For the following functions, use Venn Diagrams to determine if the cover is redundant.
a. (x + z ) ( x + z) (x + y)
b. (x + y) (x + z) (y + z )
4.6. DeMorgan's Law Revisited
Theorem 3.6 (Repeated)
Theorem 3.6a: (a + b)̄ = ā b̄
Theorem 3.6b: (a b)̄ = ā + b̄
Theorem 4.2a: If f is in dnf, then if f̄ is found using DeMorgan's Law, f̄ will be in cnf.
Theorem 4.2b: If f is in cnf, then if f̄ is found using DeMorgan's Law, f̄ will be in dnf.
Theorem 4.3: Generalization of DeMorgan's Law.
Expressions of any degree of complexity may be complemented through the following generalized process:
a. Insert all implied operators.
b. Insert parentheses so that there are no implied hierarchical operations.
c. Change all binary operators with an odd number of bars above them. (Replace them with the other binary operator.)
d. Complement all literals with an odd number of bars above them.
Problem 4.6. Use DeMorgan's Law to find f̄. The final expression should not have any complemented expressions or terms other than literals. It may be left in mixed form.
a. f(x,y,z) = x y + y z
b. f(x,y,z) = (x + y) (y + z)

c. f(x,y,z) = x y z + x (yz)
d. f(x,y,z) = (x + y) + xz yz
e. f(x,y,z) = x ( xy(x+z) + ( x + z ) + ( x + y + z ))
4.7. Converting To Canonical Form
Theorem 4.4 is the principal theorem for changing normal terms which are not canonical into canonical terms.
Theorem 4.4: Algebraic Expansion Theorem
Theorem 4.4a: a + b = a + b + c c̄ = (a + b + c) (a + b + c̄)
Theorem 4.4b: a b = a b (c + c̄) = a b c + a b c̄
Problem 4.7. Convert the following functions from dnf to dcf.
a. f(x,y,z) = xy + y z
b. f(x,y,z) = x
c. f(x,y,z) = y + x z
Problem 4.8. Convert the following functions from cnf to ccf.
a. f(x,y,z) = (x+y)(y+z)
b. f(x,y,z) = x
c. f(x,y,z) = y ( x + z)
The definitions and theorems presented to this point provide us with the necessary tools for algebraic manipulation. A pictorial view of the processes is presented in Figure 4.1. This figure is divided into f above the center and f̄ below the center. Both functions can be expressed algebraically in disjunctive, conjunctive or mixed forms. The overlapping conjunctive and disjunctive forms show that for some functions the forms are not mutually exclusive. The lines and arrows represent the application of the principal postulates or theorems used in changing the algebraic expression from one form to another.
Problem 4.9. For the following functions, find both f and f̄ in dnf, cnf, dcf and ccf.
a. f(x,y,z) = x y z
b. f(x,y,z) = x + y + z
c. f(x,y,z) = xy + y z
d. f(x,y,z) = (x + y) ( y + z)
e. f(w,x,y,z) = (w + x) y ( w + z)
f. f(w,x,y,z) = w xy + w (x+ z )
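Theorem 4.4b can be read as a procedure: for each input variable missing from a product term, multiply by (c + c̄), which amounts to enumerating every assignment of the free variables. A hypothetical sketch, with a term encoded as a dict of fixed literal values (the encoding is ours, not the text's):

```python
from itertools import product

# Expand a normal product term into the canonical terms (minterm
# assignments) it covers, per Theorem 4.4b.
def expand(term, variables):
    free = [v for v in variables if v not in term]  # variables to add
    rows = []
    for bits in product((0, 1), repeat=len(free)):
        row = dict(term)
        row.update(zip(free, bits))
        rows.append(tuple(row[v] for v in variables))
    return rows

# The term x ȳ over (x, y, z) expands to x ȳ z̄ + x ȳ z:
print(expand({'x': 1, 'y': 0}, ('x', 'y', 'z')))  # [(1, 0, 0), (1, 0, 1)]
```

Each returned tuple is one canonical term, written as the 0/1 values of the ordered variables.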


(Figure 4.1 diagram: the dnf, cnf, dcf and ccf forms of F and of F̄, linked by arrows numbered with the manipulations in the legend below.)
1. a + (b c) = (a+b) (a + c)
2. a (b + c) = (a b) + (a c)
3. ((a + b) (a + c))̄ = (ā b̄) + (ā c̄)
4. ((a b) + (c d))̄ = (ā + b̄) (c̄ + d̄)
5. a b = a b (c + c̄)
6. a + b = a + b + (c c̄)
Figure 4.1. Algebraic Manipulation with F = Function (2 or more variables)
4.8. Decimal Notation
The algebraic manipulations are quite sufficient to do everything we want to do in switching theory. However, they are not efficient in the use of our time. As you probably noticed in the last problem set, the amount of "pencil pushing" is substantial with just four variables. The amount of effort doubles with the addition of each variable and becomes tedious with five or more variables. A shorthand method of notation is therefore much to be desired. The method that has evolved is called decimal notation. The decimal notation used in this text is directly related to the canonical terms and to the areas on the Venn Diagram (or on the Karnaugh Maps to be developed shortly).


4.9. Decimal Notation - Canonical Product Terms
Theorem 4.5a: A canonical product term represents an undivided area on a Venn Diagram. The proof is very simple: there are no other literals to intersect with it, and the only way an area can be divided is by intersection.
36. Minterm: A canonical product term is also called a minterm (because it represents a minimum (undivided) area on the Venn Diagram).
Theorem 4.6a: Every undivided area on a Venn Diagram is represented by a minterm.
Theorem 4.7a: The union of all minterms forms the universe. For example, for a 3 variable system,
1 = (x̄ + x) (ȳ + y) (z̄ + z)
1 = x̄ ȳ z̄ + x̄ ȳ z + x̄ y z̄ + x̄ y z + x ȳ z̄ + x ȳ z + x y z̄ + x y z.
In switching theory, each minterm is a mini-function. If we give each of these a "name" that is shorter than the algebraic term, a "shorthand" can be developed. We may then visualize that any function f(x,y,z) can be constructed by or-ing minterms together. We will talk about the union or summing of minterms instead of saying or-ing. The selection of "names" for minterms is according to the following convention. For a minterm:
a. Order the variables within the term.
b. Replace each complemented literal with a 0.
c. Replace each uncomplemented literal with a 1.
d. The decimal equivalent of the resultant binary number will be taken as the name of the minterm.
For a switching system S(x,y,z), the universe may be represented as 1 = ∑(0,1,2,3,4,5,6,7), which may be stated as the sum of all the minterms or as the union of all the minterms. With a system with input variables x, y and z, the resultant use of or gates, and gates, and inverters can only result in linear combinations of the minterms (the linear combination in this case being the or operation). Any function f(x,y,z) = ∑(mi), where the mi are selected from {0,1,2,3,4,5,6,7}. Normally, the mi are written in numerical order and the parentheses indicating an ordered set are appropriate, though they are not strictly necessary.
Functions thus expressed will be said to be in decimal form.
Problem 4.10. Convert the following functions to decimal form.
a. f(x,y,z) = x y z + xyz
b. f(x,y,z) = xy + y z
c. f(x,y,z) = (x + y)
Problem 4.11. Convert the following functions to algebraic disjunctive canonical form.
a. f(x,y,z) = ∑(1,2,4,5)
b. f(x,y,z) = ∑(0,2,7)
Theorem 4.8a: The disjunction (or-ing) of two functions results in the union of their minterms.
Theorem 4.9a: The conjunction (and-ing) of two functions results in the intersection of their minterms.
Theorem 4.10a: The complement of a function will have all of the minterms except those in the function.
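The naming convention and the decimal form can be computed mechanically; a minimal sketch (helper names are ours):

```python
from itertools import product

# Name a minterm: order the variables, write 0 for a complemented literal
# and 1 for an uncomplemented one, and read the result as binary.
def minterm_name(bits):
    return int("".join(str(b) for b in bits), 2)

assert minterm_name((1, 0, 1)) == 5   # x ȳ z  ->  101  ->  5

# Decimal form of a function: the names of the input states where it is 1.
def decimal_form(f, nvars):
    return sorted(minterm_name(v)
                  for v in product((0, 1), repeat=nvars) if f(*v))

f = lambda x, y, z: (x & y) | ((1 - y) & z)   # f = x y + ȳ z (our example)
print(decimal_form(f, 3))  # [1, 5, 6, 7]
```

Theorems 4.8a-4.10a then become ordinary set operations on the lists these helpers produce.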

Problem 4.12. Find both the algebraic and decimal forms for the requested functions below, given f(x,y,z) = ∑(1,2,4,6) and g(x,y,z) = ∑(0,1,2,5):
a. f + g b. f g c. f + g d. f g e. f + g
4.10. Decimal Notation - Canonical Sum Terms
The convention used in this text for naming canonical sum terms is reversed from that for canonical product terms. This may seem strange at first, but the prudence of this decision will become evident as the manipulations are described. Consider f(x,y,z) = ∑(3); then f(x,y,z) = x̄ y z. The complement of f will be the union of all the other minterms. However, by DeMorgan's Law, f̄(x,y,z) = (x̄ y z)̄ = x + ȳ + z̄, which is a canonical sum term. Each minterm is related directly to a canonical sum term through DeMorgan's Law. This means that the sum term covers the entire universe except for one undivided area.
37. A canonical sum term is also called a maxterm (because it represents the universe less one undivided area).
Theorem 4.6b: The complement of every undivided area on a Venn Diagram is represented by a maxterm.
Theorem 4.7b: The intersection of all maxterms is the null set. For example, for a three variable system:
0 = (x + y + z) (x + y + z̄) (x + ȳ + z) (x + ȳ + z̄) (x̄ + y + z) (x̄ + y + z̄) (x̄ + ȳ + z) (x̄ + ȳ + z̄)

In switching theory, each maxterm is a maxi-function. As we take the conjunction (and-ing) of one maxterm with another, the intersection of the two will result in the omission of the two associated minterm areas from the resultant function. If we take the conjunction of all the maxterms, then all minterm areas will be omitted and the resultant is f(x,y,z) = 0. Any area on the Venn Diagram may be obtained by starting with the Universe and then taking the conjunction of maxterms in such a way that the areas not in the function are removed - a pruning effect, so to speak. Philosophizing a bit further, we see that what we are actually doing is accumulating the zeros of the function. As we accumulate maxterms through the conjunction process, we are actually building the zeros of the expression until there are no more zeros. This phenomenon is a direct result of duality.
38. The convention for naming maxterms is as follows:
a. Order the variables within the term.
b. Replace every uncomplemented literal with a 0.
c. Replace every complemented literal with a 1.
d. The decimal equivalent of the resultant binary number is the name of the maxterm.
The consequences of this naming convention are as follows:
39. The name of a maxterm is the same as the name of the minterm representing the area it omits from the universe.
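Consequence 39 means that, for a complete function, passing between the minterm form and the maxterm form is just set complementation over the universe; a minimal sketch (the helper name is ours):

```python
# For a complete function on nvars variables, the maxterm set is the
# complement of the minterm set within the universe {0, ..., 2**nvars - 1}.
def maxterm_set(minterms, nvars):
    universe = set(range(2 ** nvars))
    return sorted(universe - set(minterms))

print(maxterm_set([0, 1, 2], 3))  # [3, 4, 5, 6, 7]
```

This is exactly the step used in item 40 below, where f = ∑(0,1,2) is rewritten as f = ∏(3,4,5,6,7).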

40. The cell or area on the map represented by the name of the maxterm is a zero of the function. Consider now a system S(x,y,z). The Universe contains the minterm set {0,1,2,3,4,5,6,7}. Any function will contain a subset of the Universe as its set of 1's. For any complete function, those cells which are not 1's must be 0's. The complement of the minterm set will be the maxterm set. Function f(x,y,z) = ∑(0,1,2) will have a complement f̄(x,y,z) = ∑(3,4,5,6,7). But the 1's of f̄ are 0's of f, so we may also build the function as the intersection or product of maxterms to put 0's in the appropriate cells; thus f(x,y,z) = ∏(3,4,5,6,7).
Problem 4.13. Convert the following functions to decimal product form.
a. f(x,y,z) = ( x + y + z ) (x+y+z)
b. f(x,y,z) = (x + y)( y + z)
c. f(x,y,z) = (xy)
Problem 4.14. Convert the following functions to algebraic conjunctive canonical form.
a. f(x,y,z) = ∏(1,2,4,5)
b. f(x,y,z) = ∏(0,2,7)
We now continue with more theorems.
Theorem 4.8b: The conjunction (and-ing) of two functions results in the union of their maxterms.
Theorem 4.9b: The disjunction (or-ing) of two functions results in the intersection of their maxterms.
Theorem 4.10b: The complement of a function will have all of the maxterms except those in the function.
Problem 4.15. Find both the algebraic and decimal forms for the requested functions below, given f(x,y,z) = ∏(1,2,4,6) and g(x,y,z) = ∏(0,1,2,5):
a. f + g b. f g c. f + g d. f g e. f + g
Figure 4.2 is a pictorial representation of the manipulations available in decimal notation. Notice that all manipulations are very simple and that (now) direct paths exist between the dcf versions of f and f̄ and between the ccf versions of f and f̄.


(Figure 4.2 diagram: the dcf and ccf forms of F and of F̄, linked by arrows labeled 1, 2 and 3 for the manipulations in the legend below.)
Note: Conventions: x y z = ∑7 and x̄ + ȳ + z̄ = ∏7. Canonical Forms: f = ∑( ) and f = ∏( ).
1. Complementation of the minterm or maxterm set.
2. Change ∑ to ∏.
3. Change ∏ to ∑.
Figure 4.2. Decimal Manipulation: F = Function (2 or more variables)
We may use set theory for discussion purposes with regard to minterms and maxterms. The input variables of a switching system determine the domain of its switching functions. For an n variable system, the domain may be represented by the set of numbers {0, 1, 2, ..., 2^n - 1} that represent the Universe of its minterm and maxterm sets, denoted U. If f = ∑(m) where m is a subset of U, then f̄ = ∑(m̄); also f = ∏(m̄) and f̄ = ∏(m). If there are two functions in the system, and f1 = ∑(m1) and f2 = ∑(m2), then
f1 + f2 = ∑(m1 ∪ m2) (ones accumulate with the union of functions)
f1 f2 = ∑(m1 ∩ m2) (ones diminish with the intersection of functions)

and if M1 = m̄1 and M2 = m̄2, then
f1 f2 = ∏(M1 ∪ M2) (zeros accumulate with the intersection of functions)
f1 + f2 = ∏(M1 ∩ M2) (zeros diminish with the union of functions)
These effects may be stated in theorem form as follows:
Theorem 4.8a: The disjunction (or-ing) of two functions results in the union of their minterm sets.
Theorem 4.8b: The conjunction (and-ing) of two functions results in the union of their maxterm sets.
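These set-theoretic statements can be checked directly with Python sets; a small sketch using the minterm sets from Problem 4.16 below:

```python
# Ones accumulate under union (or), diminish under intersection (and),
# and complementation swaps the minterm and maxterm sets.
U = set(range(8))                     # universe for 3 variables
f1, f2 = {3, 4, 6, 7}, {0, 1, 3, 6}  # minterm sets

assert f1 | f2 == {0, 1, 3, 4, 6, 7}   # minterms of f1 + f2
assert f1 & f2 == {3, 6}               # minterms of f1 f2
M1, M2 = U - f1, U - f2                # maxterm sets
assert U - (f1 & f2) == M1 | M2        # zeros accumulate with intersection
assert U - (f1 | f2) == M1 & M2        # zeros diminish with union
print("all identities hold")
```

The same four lines of assertions settle any instance of Theorems 4.8-4.10 by inspection.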

Theorem 4.9a: The conjunction (and-ing) of two functions results in the intersection of their minterm sets.
Theorem 4.9b: The disjunction (or-ing) of two functions results in the intersection of their maxterm sets.
Theorem 4.10: The complementing of a function results in the complementation of its minterm and maxterm sets.
Problem 4.16. Given f1 = ∑(3,4,6,7) and f2 = ∑(0,1,3,6), fill in the following. Hint: The ∑ sign implies a focus on the function's 1's and the ∏ sign implies a focus on its 0's.
a. f1 = ∑( )  b. f1 = ∏( )
c. f2 = ∑( )  d. f2 = ∏( )
e. f̄1 = ∑( )  f. f̄1 = ∏( )
g. f̄2 = ∑( )  h. f̄2 = ∏( )
i. f1 f2 = ∑( )  j. f1 + f2 = ∑( )
k. f1 + f2 = ∏( )  l. f̄1 f2 = ∑( )
Problem 4.17. Circle the appropriate words in parentheses.
a. The ones of the function f1 f̄2 are the same as the (union, intersection) of the (ones, zeros) of f1 and the (ones, zeros) of f̄2.
b. The zeros of the function f1 + f̄2 are the same as the (union, intersection) of the (ones, zeros) of f1 and the (ones, zeros) of f̄2.
c. The zeros of the complement of the function f1 f̄2 are the same as the (union, intersection) of the (ones, zeros) of f1 and the (ones, zeros) of f̄2.
4.11. Karnaugh Maps
In order to take advantage of the Venn Diagram concepts when more than three variables are involved, Veitch and Karnaugh both developed diagrams for visualizing the domain of switching functions. The Karnaugh Map has advantages over the Veitch Diagram because it emphasizes "adjacencies" (a concept to be discussed later in the chapter), and thus the Karnaugh Map will be used in this text. The Karnaugh Map is also easier to scan than the Venn Diagram - even for three variables. From this point on, Karnaugh Maps (also known as K-Maps) will be used exclusively. There are several variations of Karnaugh Maps in existence. The differences lie in whether the cells are numbered vertically or horizontally and in the method of extending the map's notation to more variables. The Karnaugh Maps to be used in this text are shown in Figures 4.3 and 4.4. The method of extension to additional variables in these maps is particularly simple in that the highest order bits representing the variables are always along the top edge and the lowest order bits are always along the vertical edge. This results in a vertical numbering scheme that will change as the map is extended vertically. However, the overall advantages appear to outweigh the disadvantages. Also, the development rule will be to make a square if possible, and otherwise to extend the map horizontally.
The cells in the Karnaugh maps in Figures 4.3 and 4.4 have been numbered with the decimal equivalent of the minterm they represent. The number associated with each cell is the decimal equivalent of the binary number formed by appending the vertical (row) binary

bits to the horizontal (column) binary bits. The order of the binary bits, both vertically and horizontally, follows a coding scheme (due to Gray) that causes the binary number of each cell to differ from that of each adjacent cell in only one bit position. The sequencing of the numbers follows an order that can be described as a reflection in a mirror placed at the powers-of-two positions, either vertically or horizontally. The number in each cell therefore represents the undivided area either included by the corresponding minterm or excluded by the corresponding maxterm.

There are basically two ways of working with Karnaugh Maps. The first is used when working with the canonical terms in decimal notation, where the function is represented by either the set of minterms or the set of maxterms. If the function is represented by minterms, for example f(x,y,z) = (1,3,4,6,7), then we simply place ones in the cells whose numbers are contained in the parentheses and zeros in all remaining cells. If the function is represented in maxterm form, for example f(x,y,z) = (0,2,5), then we place zeros in the cells listed in the parentheses and ones in all other cells.
Two variables (a, b):      Three variables (a, b, c):
      a                          ab
 b    0  1                  c    00  01  11  10
 0    0  2                  0     0   2   6   4
 1    1  3                  1     1   3   7   5

Four variables (a, b, c, d):        Five variables (a, b, c, d, e):
       ab                                 abc
cd    00  01  11  10                de   000  001  011  010  110  111  101  100
00     0   4  12   8                00     0    4   12    8   24   28   20   16
01     1   5  13   9                01     1    5   13    9   25   29   21   17
11     3   7  15  11                11     3    7   15   11   27   31   23   19
10     2   6  14  10                10     2    6   14   10   26   30   22   18

Six variables (a, b, c, d, e, f):
         abc
def    000  001  011  010  110  111  101  100
000      0    8   24   16   48   56   40   32
001      1    9   25   17   49   57   41   33
011      3   11   27   19   51   59   43   35
010      2   10   26   18   50   58   42   34
110      6   14   30   22   54   62   46   38
111      7   15   31   23   55   63   47   39
101      5   13   29   21   53   61   45   37
100      4   12   28   20   52   60   44   36

Figure 4.3. Karnaugh Maps for up to Six Variables with Variable Legends
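The reflected numbering just described can be generated mechanically. The sketch below (the function names are mine, not the text's) builds the Gray ordering by the "mirror at powers of two" rule and numbers each cell by appending the column bits to the row bits, reproducing the four-variable map of Figure 4.3.

```python
def gray_order(n_bits):
    """Return the reflected (Gray-code) ordering of n-bit labels as integers."""
    order = [0]
    for bit in range(n_bits):
        # reflect the sequence in a "mirror" and set the new high bit on the copy
        order += [x | (1 << bit) for x in reversed(order)]
    return order

def kmap_numbers(col_bits, row_bits):
    """Cell numbers for a map with col_bits variables on top, row_bits on the side."""
    cols = gray_order(col_bits)
    rows = gray_order(row_bits)
    # column bits are the high-order bits, row bits the low-order bits
    return [[(c << row_bits) | r for c in cols] for r in rows]

# The four-variable map of Figure 4.3, one printed row per map row:
for row in kmap_numbers(2, 2):
    print(row)
```

Note that adjacent labels in `gray_order` differ in exactly one bit, which is what makes horizontally and vertically adjacent cells differ in one bit position.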

(Figure 4.4 repeats the maps of Figure 4.3, from one variable up to six variables, with the decimal number of each cell entered in the cell.)

Figure 4.4. Karnaugh Maps for up to Six Variables with Cells Numbered

Problem 4.18. Show the Karnaugh Map representations of the following functions.
a. f(x,y,z) = (0,1,4,5,7)
b. f(x,y,z) = (3,4,6,7)
c. f(w,x,y,z) = (0,2,7,8,12,13,14)
d. f(w,x,y,z) = (2,3,4,5,8,10,12,14)

The other method is basically the same as the intuitive approach used with the Venn Diagram. If a function is given in an algebraic form, then you may want to put in the ones or zeros by scanning the intersections or unions represented by the terms. The zeros and ones that appear along the boundaries may be used as guides, or you can label the region covered by each variable as shown in Figure 4.4.

If the function is given in disjunctive normal form, the map is scanned for the intersections represented by each product term and ones are inserted accordingly. The function is thus built using the product terms as building blocks. When the ones for all terms have been placed in the map, the remaining cells are filled with zeros. The function is therefore represented by the union of the ones over all the product terms.

If the function is given in conjunctive normal form, the map is scanned, mentally developing the union represented by each sum term. After the union for each sum term has been developed, zeros are placed in those cells not covered by the union. In this way, the function is built using the sum terms as building blocks. When the zeros associated with all sum terms have been placed in the map, the remaining cells are filled with ones. Thus, the ones of the function are represented as the intersection of all the sum terms.

Problem 4.19. Place ones and zeros in Karnaugh Maps to represent the following functions.
a. f1(x,y,z) = x y + xz + x y z
b. f2(x,y,z) = (x+ y )(x+z)(x+ y +z)
c. f3(w,x,y,z) = wx y + w xz + x yz + y z
d. f4(w,x,y,z) = (w+x+ y )( w +x+z)( x +y+z)( y + z )
e. Show the intersection of the functions f1 f2.
f. Show the union of the functions f1+f2.
g. Show the intersection of the functions f3 f4.
h. Show the union of the functions f3+f4.

Problem 4.20. Fill in the values, given the functions in the previous problem.
a. f1 = ( ) = ( )
b. f2 = ( ) = ( )
c. f3 = ( ) = ( )
d. f4 = ( ) = ( )
e. f1 f2 = ( ) = ( )
f. f1+f2 = ( ) = ( )
g. f3 f4 = ( ) = ( )
h. f3+f4 = ( ) = ( )
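The "scanning" just described can be done by brute force: evaluate the function for every point of the domain and record which cells get ones and which get zeros. A sketch (helper names and the sample function are mine; the complement bars of Problem 4.19 did not survive in this copy, so an invented function is used):

```python
from itertools import product

def ones_and_zeros(f, n_vars):
    """Return (minterm list, maxterm list) of an n-variable function f.
    Variables are ordered so the first argument is the high-order bit."""
    ones, zeros = [], []
    for bits in product((0, 1), repeat=n_vars):
        number = int("".join(map(str, bits)), 2)   # decimal cell number
        (ones if f(*bits) else zeros).append(number)
    return ones, zeros

# An illustrative function f1(x,y,z) = x'y + xz (an invented example):
f1 = lambda x, y, z: (not x and y) or (x and z)
print(ones_and_zeros(f1, 3))   # → ([2, 3, 5, 7], [0, 1, 4, 6])
```

Placing ones in cells 2, 3, 5, 7 and zeros in the rest fills the three-variable map for this function.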

4.12. N-Cubes
Another representation of the domain of a switching function is found in the concept of N-cubes. This representation is particularly enlightening in the discussion of error-correcting codes. Consider the picture of a 3-dimensional cube in Figure 4.5. Basically, the vertices of the cube represent the domain. The edges of the cube represent connections between "adjacent" codes. That is to say, if one and only one variable changes state, then we consider the system to have moved from one vertex to another along the edge connecting the two.

(Figure 4.5. 3-Cube: a cube whose eight vertices are labeled with the codes 000 through 111, with x, y, and z as the three axes.)

If we develop a binary code to represent exactly four items f = {a,b,c,d}, then we might select a two-variable system (x,y) as shown in the first code of Figure 4.6.

f    x y        f    x y z        f    x y z
a    0 0        a    0 0 0        a    0 0 0
b    0 1        b    1 1 0        b    1 0 1
c    1 0        c    0 1 1        c    1 1 0
d    1 1        d    1 0 1        d    0 1 1

Figure 4.6. Methods of Coding {a,b,c,d}

With such a code, however, there is no such thing as an invalid code. If one bit is changed in transmitting the code from one device to another, the wrong code is received and there is no way of knowing that an error has occurred. If three bits are used, as shown in the other two code options, then it will be observed that vertices of the 3-cube have been selected that are exactly two edges from any other valid vertex. This means that a change in any single variable during transmission will result in an invalid code. The invalid code can be detected as such and retransmission requested. If a double error occurs, it will still go undetected. However, if the probability of a single error is extremely small, then the probability of two errors occurring simultaneously is essentially the probability of a single error squared.

If we want to maintain the same ordering within the codes as in the two-variable system, then the code on the right would be used. This particular option is called even parity and can be stated in the following way: "An additional bit, called a parity bit, is added to the code. The added bit takes on the value which will make the total number of one bits in a valid code an even number." Odd parity may also be used, the added bit taking on the value which makes the total number of one bits in a valid code an odd number. Odd parity simply represents the selection of the "other" set of vertices, again selecting the vertices such that the original values of x and y are identical to those selected in the (x,y) system.

Consider now the need to transmit just two elements {a,b}.
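The parity and distance ideas above can be sketched in a few lines of Python (the function names are mine, and the parity bit is prepended here, as in the right-hand code of Figure 4.6):

```python
def weight(word):
    """Number of one bits in a code word given as a sequence of bits."""
    return sum(word)

def add_even_parity(bits):
    """Prepend a parity bit so every valid word has an even number of ones."""
    return [weight(bits) % 2] + list(bits)

def single_error_detected(word):
    """An odd number of ones means an odd number of bits flipped in transit."""
    return weight(word) % 2 == 1

# Distance-3 coding of {a, b} on the 3-cube: correct to the nearest valid vertex.
VALID = {"a": (0, 0, 0), "b": (1, 1, 1)}

def correct(received):
    """Return the element whose code word is the fewest cube edges away."""
    distance = lambda w: sum(x != y for x, y in zip(w, received))
    return min(VALID, key=lambda k: distance(VALID[k]))

word = add_even_parity([0, 1])           # encode "b" of the two-variable code
assert not single_error_detected(word)
word[2] ^= 1                             # one bit flips in transmission
assert single_error_detected(word)       # parity catches the single error
assert correct((0, 1, 0)) == "a"         # distance-3 code corrects one error
```

Note that the parity code detects a single error but cannot say which bit flipped; the distance-3 code can, because every received word is nearer to one valid vertex than to the other.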
If the vertices 000 and 111 are chosen in the system (x,y,z) it is seen that three edges exist between valid vertices. This means that the existence of two errors can now be detected. Further, an algorithm can be set up that would simply write a log entry into the user log whenever an error is detected,
but accept the transmission and correct the code to the nearest valid vertex. Such a code is said to provide double error detection and single error correction.

Graphical presentation of N-cubes above order 3 leaves quite a bit to be desired; however, the concepts of edges and vertices still apply. In an N-dimensional system, there will be 2^N vertices, and each vertex will have N edges leading to N adjacent vertices. If we return to the Karnaugh Map, we see that the maximum number of adjacent cells in our two-dimensional representation will be four cells. The Karnaugh Map is constructed in such a way that the adjacent vertices of up to four dimensions will always appear as adjacent cells. To find the other adjacent vertices when more than four variables are present, we must "look in the mirror" for the cell's reflection. It is also a property of the Karnaugh Map that the top row of cells is adjacent to the bottom row and that the left column of cells is adjacent to the right column, both in an "end around" sense. Coding theory is an extensive subject; this section provides only a simple view of it.

4.13. Tie Sets and Cut Sets
When working with analog signal switches, or with relays, there are two other concepts of substantial value: tie sets and cut sets. Consider the diagram in Figure 4.7a. If we consider the path through the circuit following the uppermost line, then we see that the product term x y will form one term of a dnf functional representation. This path represents a tie in forming circuit continuity, and the variables involved in the tie are the tie-set. Any function in dnf can be represented as the disjunction of all tie-sets. The term x y represents a tie and is therefore a member of that set. The diagram in Figure 4.7a is not a circuit that has been constructed from a dnf expression. The dnf form for the circuit, however, can be obtained by finding the set of tie-sets.

To do this, we need an algorithm that is assured of finding all possible tie-sets. This is the "keep to the left" algorithm frequently used in topological searches. In principle, we start at the left node and proceed forward toward the right node, always taking the left-most path. Upon reaching the goal node, we follow the next left-most path that has not yet been traversed, again seeking the goal node (always keeping to the left). When there are no more paths to be taken from a node (the next left-most path is the one from the previous node), then we go back to the previous node and the search continues, looking for the next left-most path from that node. No path may contain any loops. If, in forming a path, we come to a switch that is the complement of one already in the path, then that path will not be a valid path (x x = 0), and we go back to the previous node to continue. The switches that form a series circuit along each of the paths found from the beginning node to the ending node are tie-sets, each of which constitutes a product term.

By now the student will have gained a certain appreciation for the concept of duality, and of course there is a dual to the concept just presented: the cut set. Consider the diagram in Figure 4.7b. In this circuit, note that continuity will be broken if both switch x is open and switch y is open. We can say that we are guaranteed a zero of transmission under the condition xy. Any such condition that will assure an open circuit produces a cut-set. We will define a cut-set for a circuit as a set of variables that will ensure a zero of transmission. The best way to visualize the process is to think in terms of forming the complement of f in dnf. We will see in a few moments, however, that we may omit this step (thanks to DeMorgan's Law)
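The path search just described can be sketched as an exhaustive graph traversal. This is a sketch under my own assumptions (the circuit, the node names, and the use of an apostrophe for a complemented contact are all mine): it enumerates every loop-free path from the input node to the output node and discards any path containing both a switch and its complement.

```python
def tie_sets(edges, start, goal):
    """edges: list of (node_a, node_b, label); a label like "x'" is the
    complement of "x". Returns the tie-sets as sorted tuples of labels."""
    def complement(lbl):
        return lbl[:-1] if lbl.endswith("'") else lbl + "'"
    results = []
    def search(node, visited, path):
        if node == goal:
            results.append(tuple(sorted(path)))
            return
        for a, b, lbl in edges:
            for u, v in ((a, b), (b, a)):          # contacts conduct both ways
                if u == node and v not in visited:
                    if complement(lbl) in path:    # x x' = 0: invalid path
                        continue
                    search(v, visited | {v}, path + [lbl])
    search(start, {start}, [])
    return sorted(set(results))

# A two-branch circuit between nodes 1 and 3: x in series with y on top, z below.
edges = [(1, 2, "x"), (2, 3, "y"), (1, 3, "z")]
print(tie_sets(edges, 1, 3))   # → [('x', 'y'), ('z',)], i.e. f = xy + z
```

Each returned tuple is one product term, and their disjunction is the dnf form of the circuit, exactly as the "keep to the left" search would produce.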

and write the function directly in cnf, which is really our goal as opposed to that of just finding the cut-set.
(Figure 4.7, parts a through e: contact-network drawings built from switches w, x, y, z and their complements.)

Figure 4.7. Analog Switching Circuits

The process for assembling the cut-sets of a circuit requires the construction of the topological inverse of the circuit with respect to the nodes and links. This is accomplished by placing one node above the circuit, one node below the circuit, and a node inside each of the loops, and then constructing a path from each node (of the inverse graph) to every other node that can be reached, passing through exactly one contact. The problem is then solved using essentially the same algorithm used in finding the tie-sets. The only difference is that if we are writing the dnf expression for f , which is called the hindrance function for the circuit, then we are establishing the equations of product terms that guarantee that the circuit will not have continuity. We must place in the product terms the complement of the switch names in the path. Having collected all the tie-sets for f , we then complement f and obtain f in cnf. We note that, even more simply, we could have written the cnf form for f directly from the inverted circuit by using the names of the switches (not complementing them as we had to do when writing the hindrance function).

It is occasionally desirable to redraw the circuit in a form called a graph. The circuit is represented as nodes with branches connecting the nodes. This makes the connections clearer and the paths easier to follow. The graph of Figure 4.7a and its topological inverse are shown in Figure 4.8.
(Figure 4.8 shows the graph of the circuit of Figure 4.7a together with its topological inverse.)

By tie-sets: f = xy+xz+xy+zy
By cut-sets: f = (z+x+y) (y+z+x)

Figure 4.8. Figure 4.7a Drawn as a Graph with its Topological Inverse
Problem 4.21. Find the dnf and cnf forms for the circuits in Figure 4.7 using tie sets and cut sets, respectively.

4.14. Some Aspects of Circuit Design
The design of circuits normally begins with a verbal specification of what the circuit is supposed to do. The first step for the circuit designer is to convert this word problem into an equivalent mathematical model. There are generally many circuits that could be built to meet the specifications, but some would be more economical to build, or more reliable in operation, than others. The topics of economics and reliability will be discussed in the next chapter; in the remainder of this chapter, the concern will be with specifying the mathematical models.

The process of modeling requires the generation of a function that relates to the system being designed. Since we have already discussed the relationships between electrical signaling systems and the Boolean Algebra, no more will be said on that subject. Instead, we turn now to the concepts of coverage. The process of design requires that we meet the original specifications for the circuit. Although one function may cover another function in either the 1's or 0's sense without an equivalence relation, the concern now turns to functions that cover both the 0's and 1's. If the functions are complete, then the equivalence relation exists.

41. If an expression defines a function that covers both the 0's and 1's of a specified function, then the expression is said to represent a cover for the specified function. Note that when used as a noun, the word cover implies coverage of both 0's and 1's.
41.a. If an equivalence relation exists between a function defined by a dnf expression and a specified function, then the expression is said to be a dnf cover for the specified function.
41.b. If an equivalence relation exists between a function defined by a cnf expression and a specified function, then the expression is said to be a cnf cover for the specified function.

During the design phase for a circuit, there are frequently input states for which the function is not specified. These input states are referred to as "don't cares", and the output is represented by a "+" in the Table of Combinations and in the Karnaugh Map. Since the complete function may have either a 1 value or a 0 value for each don't care input state, there
will be more than one complete function that will cover both the 0's and 1's of the specified function.

42. Two different complete functions which cover the 0's and 1's of a specified incomplete function are said to be equivalent with respect to the specified incomplete function. Each such function is said to be an equivalent cover of the incompletely specified function.

Generally, no distinction is made between functions for which an equivalence relation exists and functions that are equivalent with respect to the specifications. In the material that follows, the term equivalent will be dropped as an adjective to cover unless a particular point is being made.

4.15. Specifying Incomplete Functions
The best way to approach the specification of incomplete functions is to consider the don't care input states as the specification of a don't care function. We can then develop precise definitions.

43. The don't care function with respect to a specified function is a complete function over all possible input conditions, containing 1's where the output is not specified and 0's everywhere else.

The don't care function may be defined in any of the forms previously discussed. Almost universally, however, we are interested in the dcf form and, in particular, the decimal dcf form. For example, if there is no output specification for the cells represented by decimal numbers {8,9,13}, then the don't care function becomes dc = (8,9,13). Of course, it could be written in product form, dc = (0,1,2,3,4,5,6,7,10,11,12,14,15). However, it would almost always be converted immediately to dcf form, because it is the individual don't care cells (the 1's of the don't care function) that are of concern. We could say that the product form brings into evidence the cells for which the function is specified (the "do care" cells).

Consider a function f(x,y,z) where the input variables can never all be low or all be high. Consider also that the function must be one for x y z, x yz, and xy z . The forms for expressing the function are:
f(x,y,z) = (1,3,6) with dc = (0,7)
f(x,y,z) = (2,4,5) with dc = (0,7)
In this case, there would be four complete functions that are equivalent to the incompletely specified function. These complete functions could be specified in either decimal dcf or decimal ccf. The dcf decimal forms for the complete functions would be (1,3,6), (0,1,3,6), (1,3,6,7) and (0,1,3,6,7). (Of course, with complete functions there is no don't care function.)

Since we can never guarantee that the don't care cells won't appear in the parentheses for the 1's or 0's, there must be an argument solver statement. It is this: the specification of the don't care function always takes precedence over the 1's or 0's specifications.

The question also arises as to whether don't cares are implicants or implicates. It is generally advantageous to consider them as both. When working with dnf or dcf, they are considered implicants, and when working with cnf or ccf, they are considered implicates. They are never considered as cells that must be covered, however, and any implicant or implicate made up entirely of don't cares would never be included in building a function. The implicant and implicate are therefore redefined in the following way to cover these cases.

44.a. An implicant is a product term that does not destroy the 0's cover.
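Definition 42 can be made concrete by enumeration: each don't care cell may be assigned either a 0 or a 1, so a specification with k don't cares has 2^k equivalent complete functions. A sketch (the helper name is mine):

```python
from itertools import combinations

def equivalent_covers(ones, dc):
    """All decimal dcf forms of complete functions covering the specification.
    The don't care specification takes precedence over the ones list."""
    ones = sorted(set(ones) - set(dc))          # dc wins any conflict
    covers = []
    for r in range(len(dc) + 1):
        for extra in combinations(sorted(dc), r):
            covers.append(sorted(ones + list(extra)))
    return covers

# The example above: f(x,y,z) = (1,3,6) with dc = (0,7)
for cover in equivalent_covers([1, 3, 6], [0, 7]):
    print(cover)
# → [1, 3, 6], [0, 1, 3, 6], [1, 3, 6, 7], [0, 1, 3, 6, 7]
```

The four lists printed are exactly the four dcf decimal forms given in the text.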
44.b. An implicate is a sum term that does not destroy the 1's cover.

Note: Once a circuit has been built to meet the specifications, the function describing the circuit will be a complete function.

Problem 4.22. How many different complete functions would be equivalent to the function f(x,y,z) = (2,4,5) with dc = (2,4,6,7)? Write them in decimal dcf and ccf. Also, write the incompletely specified function in the two recommended (desired) forms.

Problem 4.23. How many different complete functions would be equivalent to the following functions?
a. f(w,x,y,z) = (1,3,5,7,10,12,14), dc = (0,2,13,15)
b. f(w,x,y,z) = (1,3,5,7,10,12,14), dc = (0,2,13,15)
c. f(w,x,y,z) = (0,2,5,7,10,12,14), dc = (0,2,13,15)

Problem 4.24. Write the decimal forms for all the complete functions that would be equivalent with respect to the functions specified in Problems 4.23a and 4.23b. Write 4.23c in both of the recommended (desirable) forms.

Determination of the don't care function is an important part of converting word problems to switching circuit formulae. To do this, we must examine the domain of the switching function (regardless of the words specifying the requested action) to see if there are elements of the domain that can never occur; if so, these elements are don't care elements.

Problems 4.25 through 4.30. Develop the algebraic specifications for the following situations. Put the functions in decimal form, draw the Karnaugh Map and fill it in, and write an algebraic expression, trying to obtain an expression with as few literals as possible. Note that in writing specifications, when we request an action to occur for a particular set of conditions, it can be assumed that the action should occur if and only if those conditions exist. In these problems, if the variable names are not given or implied, determine and state the name of each variable and the action that it represents.

Problem 4.25. A circuit for an automobile is to detect the presence of a passenger without a seat belt fastened. There will be two pressure switches, one for each passenger seat, that will close when a passenger is seated. Each seat belt will have a normally closed switch that will open when the seat belt is fastened. Let the pressure switches be p1 and p2 and the seat belt switches be s1 and s2. Show the circuit diagram using detached contacts.

Problem 4.26. The alarm buzzer in an automobile is to sound if
a. The ignition switch (I) is on, and:
1. There is a passenger without a seat belt fastened (P) and the ignition has been on for less than 20 seconds (T), or
2. A door is not closed (let D be a door is open), or
b. The ignition switch is off and a door is open with the lights (L) on.
Show the circuit diagram using detached contacts.

Problem 4.27. A circuit has four inputs and one output. The inputs are:
w: The temperature is above 72 degrees
x: The temperature is below 62 degrees
y: The time is daytime
z: It is a weekday
The output is to be high during the day if the temperature is either above 72 degrees or below 62 degrees, except on weekends. During the night on weekdays, the output is to be high only if the temperature is above 72 degrees.
On a weekend, the output is to be high if the daytime temperature is above 72 degrees or the night-time temperature is below 62 degrees. Otherwise, the output is to be low. (Do not forget the don't cares.)

Problem 4.28. A control board has four switches, numbered 0 through 3, only one of which can be turned on at a time. The condition of these switches (off or on) is to be monitored at another location some distance away. Design a gate circuit (dnf) to light two lamps so that the condition of the lamps represents the binary number of the switch that is closed (00 represents switch 0 being closed, etc.).

Problem 4.29. Design a gate circuit (dnf) for Problem 4.28 where three lights would be used and an odd number of lights would always be on. The right two lights are to represent the binary number of the switch which is on.

Problem 4.30. Design a gate circuit (dnf) that will detect whenever there is an even number of lights on in the previous problem, thus indicating an error in the circuit.

4.16. Encoders
The circuits of Problems 4.28 and 4.29 belong to a class of circuits called encoders. The circuit of Problem 4.28 would be called a 4-to-2 encoder. 2^n-to-n encoders are used a great deal when error checking is not essential. Problem 4.29 represents an encoder with odd parity, and Problem 4.30 requests a parity checking circuit. There are also mechanical-to-electrical encoders that provide encoding for translational and rotational monitors. The standard binary sequence in continuous variable systems is generally too noisy (consider the error that occurs if an encoder moves from 0111 to 1000 and the most significant bit changes a little too soon), so encoders that encode to a Gray Code are frequently used. Binary encoders can be constructed that are noise free, but they do require digital circuitry to prevent nonadjacent codes from appearing.

4.17. Decoders
A decoder reverses the process of an encoder, and n-to-2^n decoders are frequently used.

Problem 4.31. Reconsider Problem 4.28, assuming that at the remote location there is to be a relay that closes for each individual switch. Design a decoder (dnf gate circuit) that will convert the signal on the two wires to operate the four relays.

The problems above have been designed to introduce the student to encoders and decoders and also to promote familiarity with the Karnaugh Map for design purposes. Actually, encoders and decoders are rather special circuits and are fairly easy to design without the use of algebra or Karnaugh Maps. The n-to-2^n decoders are also called "full decoders" because they can output all possible combinations. There is no way to simplify the circuits involved; hence, our algebra and Karnaugh Maps are of little use.

Problem 4.32. Let x and y be the low order bits of two binary numbers. Design a gate circuit (dnf) that will have two inputs, x and y, and two outputs, z and c, where z will be the low order bit of the sum of the binary numbers, and c will be a carry bit that can be used as an input to the circuit that will add the next bits. Draw the gate circuit realization.

Problem 4.33. Now let x and y be the next bits of the two binary numbers. Design a circuit that will take x and y and the c output from the previous stage and produce two outputs, z and c as in Problem 4.32, for the following stage. This circuit is called a "full adder", whereas the circuit in the previous problem is frequently referred to as a "half adder". Draw dnf gate circuit realizations.
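The half and full adders of Problems 4.32 and 4.33 can be sketched from the standard equations (these are the usual textbook expressions, not the text's worked answers; the full adder is built here from two half adders rather than directly in dnf):

```python
def half_adder(x, y):
    z = (x and not y) or (not x and y)    # sum bit: x y' + x' y
    c = x and y                           # carry bit: x y
    return int(z), int(c)

def full_adder(x, y, cin):
    z1, c1 = half_adder(x, y)             # add the two input bits
    z, c2 = half_adder(z1, cin)           # then add the incoming carry
    return z, int(c1 or c2)               # a carry out of either stage propagates

assert half_adder(1, 1) == (0, 1)         # 1 + 1 = binary 10
assert full_adder(1, 1, 1) == (1, 1)      # 1 + 1 + 1 = binary 11
```

Chaining full adders, each taking the previous stage's c as its cin, adds two multi-bit binary numbers one bit position at a time.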
4.18. Additional Problems for Chapter 4

Problem 4.34. A circuit is to be designed that has four binary input signals (w,x,y,z) and one binary output signal, f.
a. If there are no constraints on the signals, what is the size of the domain of f(w,x,y,z)?
b. If signals x and y can never be high at the same time, what is the size of the domain?
c. For case b above, which input conditions would be designated as don't cares?

Problem 4.35. A circuit has eight input wires that carry signals in the form of characters.
a. If there are no constraints on the input signals, how many different characters can be represented?
b. If a character only occurs when exactly three wires are high, how many characters can be represented?

Problem 4.36. A combinational switching circuit has four inputs and one output.
a. For how many different functions could the circuit possibly have been designed (including incomplete functions)?
b. How many different functions of four variables could exist at the end of the construction phase?
c. After testing for all possible combinations of three input voltages with the other input held low, how many different functions could still lie in the set of functions it might generate?

Problem 4.37. Given f(v,w,x,y,z):
a. Given that x y cannot occur on the input, how many input states will be unspecified?
b. Given that w x z cannot occur on the input, how many (total) input states would be unspecified if this were the only input constraint?
c. How many total input states will be unspecified if both terms x y and w x y represent input constraints?

Problem 4.38.
a. How many ordered pairs are there in the set describing a complete switching function of six variables?
b. How many different complete functions can be formed with six input variables?
c. How many total (complete or incomplete) switching functions?
d. How many incomplete switching functions?
e. How many different domain sets can there be for a function of six variables?
f. How many different domain sets can there be for a complete function of six variables?
g. How many domain elements in a six-variable system can have exactly four variables high?

Problem 4.39. A system is being designed around devices that have three stable states. Assuming any output of a circuit would also have three stable states, and given a system with four input variables:
a. What is the size of the domain for a complete function?
b. How many complete functions can there be?
c. How many total (complete plus incomplete) functions would there be?
d. How many incomplete functions would there be?

Problem 4.40. Convert the following expressions to dnf without using DeMorgan's Law.
a. xy + y ( x +z)
b. (xy + z )( x + yz)
c. (x+y)+z
d. (x+yz)(x y + z )
e. x( y +z) + z x
f. (x + y z )( x + z)
g. w ( y +z)(xy + z)
h. w( x + y )(xy+z)
i. wx + y z( w + x )
j. (v (w x +y z )+ w )z
k. (u + v (w + x ))(y+ z ( w +v))

Problem 4.41. Convert the functions in Problem 4.40 to cnf.

Problem 4.42. Convert the following expressions in dnf to dcf.
a. x + yz
b. wx + w x
c. w x y + w y z + x y
d. vw + w x + yz
e. uvw + w xy + yz
f. How many dcf terms are there in u+v+w+x+y+z?
g. How many in its complement?

Problem 4.43. Convert the following expressions to ccf.
a. x(y+z)
b. (w+x)( w + x )
c. (w+ x +y)( w y +z)( x +y)
d. (v+w)( w +x)(y+z)
e. (u+v+w)( w +x+y)(v+y+z)
f. How many ccf terms would there be in uvwxyz?
g. How many in its complement?

Problem 4.44. Given a function in algebraic dnf form, what two ways could you follow to develop the cnf form for the function (use postulates like P2.a., etc.)?
a. One way:
b. Another way:
As above, to develop f in dnf form:
c. One way:
d. Another way:

Problem 4.45. How many product terms will be developed if the following is distributed out?
a. f = ( w + x + y + z)(A + B + C)(D+ E + F )
b. f = (wx+ x y + w y )( w y + y z )
How many sum terms will be developed if the following is distributed out?
c. f = s t uv + w x + y
d. f = (wx+ x y + w y )( w y + y z )

Problem 4.46. Find dnf, dcf, cnf, and ccf expressions for the following and their complements.
a. (x+ y ) z+ x z
b. w(x+y)( y +z)
c. w x + w (y+z)

Problem 4.47. Using the decimal expansion theorem, convert the expressions in Problem 4.42 (a through e) to decimal dcf.
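The dnf-to-dcf conversion asked for in Problem 4.42 can be mechanized: a product term missing k of the n variables expands into 2^k minterms, one for each assignment to the missing variables. A sketch (the helper name and the dictionary representation of a term are mine):

```python
from itertools import product

def term_to_minterms(term, variables):
    """term: dict mapping a variable to 0 or 1 for each literal present,
    e.g. {'x': 1, 'y': 0} for the product term x y'.
    Returns the decimal minterms the term covers."""
    free = [v for v in variables if v not in term]
    minterms = []
    for values in product((0, 1), repeat=len(free)):
        cell = dict(term, **dict(zip(free, values)))     # fill in free variables
        number = int("".join(str(cell[v]) for v in variables), 2)
        minterms.append(number)
    return sorted(minterms)

# Problem 4.42a, x + yz over (x, y, z): the union of the two terms' minterms.
ones = sorted(set(term_to_minterms({"x": 1}, "xyz")
                  + term_to_minterms({"y": 1, "z": 1}, "xyz")))
print(ones)   # → [3, 4, 5, 6, 7], the decimal dcf of x + yz
```

The union step mirrors the map-filling rule from Section 4.11: the ones of a dnf expression are the union of the ones of its product terms.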
Problem 4.48. Using the decimal expansion theorem, convert the expressions in Problem 4.43 (a through e) to decimal ccf.

Problem 4.49. Convert the following to algebraic forms:
a. f(x,y,z) = (1,5,6)
b. f(x,y,z) = (3,4,7)
c. f(w,x,y,z) = (12,13,15)
d. f(w,x,y,z) = (3,7,11,13)
e. f(u,v,w,x,y,z) = (4,12,32,48,56)
f. f(u,v,w,x,y,z) = (1,10,21,38,44,63)

Problem 4.50. Given:
F1 = (0,1,4,5,10,12,15)
F2 = (2,3,5,6,8,10,12,14)
Fill in the following:
a. f1 = (0,1,4,5,10,12,15) = ( )
b. f2 = ( ) = (2,5,8,9,10,12)
c. f 1 = ( ) = ( )
d. f 2 = ( ) = ( )
e. f1 f 2 = ( )
f. f 1 + f2 = ( )
g. f 1 f 2 = ( )
h. f1+ f2 = ( )

Problem 4.51. Fill in the missing functions:
a. f1(w,x,y,z) = (0,1,4,5,8,10,12) = ( )
b. f2(w,x,y,z) = ( ) = (1,2,5,8,9,10,12)
c. f 1 (w,x,y,z) = ( ) = ( )
d. f 2 (w,x,y,z) = ( ) = ( )
e. f1 f2 = ( )
f. f1+ f2 = ( )
g. f1 f 2 = ( )
h. f1+ f2 = ( )
i. f1 + f 2 = ( )

Problem 4.52. Construct Karnaugh Maps for the functions in Problem 4.42 (a through e).
Problem 4.53. Construct Karnaugh Maps for the functions in Problem 4.43 (a through e).

Problem 4.54. Write dnf expressions for the functions in the Karnaugh Maps below.
Problem 4.55. Write cnf expressions for the functions shown in the Karnaugh Maps below.
Problem 4.56. Write decimal dcf expressions for the functions in the Karnaugh Maps below.
Problem 4.57. Write decimal ccf expressions for the functions in the Karnaugh Maps below.

a. wx yz 00 01 11 10 00 1 1 1 01 11 10 0 1 1 1 1 0 0 1 1 0 1 0 0 d. wx 00 01 11 yz 00 1 0 1 01 0 0 0 11 1 0 1 10 1 1 1

b. c. vwx 000 001 011 010 110 111 101 yz 00 0 0 1 0 0 1 1 01 1 0 1 0 0 1 0 11 1 1 0 0 1 1 1 10 1 0 0 0 e. 1 1 0

100 0 0 1 0

10 1 0 1 1

vwx yz 000 001 011 010 110 111 101 100 00 0 0 1 1 0 1 1 0 01 1 1 0 0 0 1 1 0 11 10 1 1 1 0 0 1 0 1 1 1 1 1 1 0 0 0

g. f. Problem 4.58. If two functions are input to an and gate: a. The 1's of the result will be the (union, intersection) of the 1's of each function. b. The 0's of the result will be the (union, intersection) of the 0's of each function. Problem 4.59. Cross out the incorrect word in parenthesis. When and-ing two functions together, we can get the decimal dcf by taking the (intersection, union) of the (ones, zeros) of the functions. We can get the decimal ccf by taking the (intersection, union) of the (ones, zeros). To obtain the decimal ccf of f1 + f2, we can take the (intersection, union) of the (ones, zeros) of f1 and the (ones, zeros) of f2. Problem 4.60. True or False (assume complete functions) a. If two functions are and-ed, then the resultant function will have 1's for the intersection of the 1's and 0's for the union of the 0's. b. Given f1 + f2, there will be 1's for the union of 0's of f1 and 1's of f2 and 0's elsewhere. c. Given f1 f 2 , there will be 0's for the union of the 0's of f1 and the 1's of f2 and 1's elsewhere. Problem 4.61. Cross out the incorrect word in parentheses.


a. The intersection of two functions is given in product form by the (union, intersection) of the functions' zeros.
b. The union of two functions is given in sum form by taking the (union, intersection) of the functions' ones.
c. Given g h = f, f can be considered as the (union, intersection) of the ones of g and h.
d. Given f = g + h, the product form of f can be considered as a (union, intersection) of the (zeros, ones) of g and h.
e. Given f = g h, the sum form of f can be considered as a (union, intersection) of the (zeros, ones) of g and h.

Problem 4.62. Read the following statements all the way through, then circle the correct word in the set(s) of parentheses.
a. The intersection of two functions is given in sum form by the (union, intersection) of the functions' (ones, zeros).
b. The union of two functions is given in product form by taking the (union, intersection) of the functions' (ones, zeros).
c. Given functions f and g (also f and g ): h = f g can be expressed in product form by taking the (union, intersection) of the zeros of (f, f ) and (g, g ).

Problem 4.63. Circle the correct answers.
a. A nand gate will produce a function, the 1's of which will be the (union, intersection) of the (ones, zeros) of the input functions.
b. A nor gate will produce a function, the 1's of which will be the (union, intersection) of the (ones, zeros) of the input functions.
c. The zeros of A + B will consist of the (union, intersection) of the (ones, zeros) of A and the (ones, zeros) of B.

Problem 4.64. Given f(v,w,x,y,z):
a. How many different domains can there be for the don't care function?
b. What is the size of the domain of the don't care function?
c. How many different don't care functions could there be?
d. Given xy cannot occur at the input, how many different decimal expressions for f could be given under the assumption that the don't care expression takes precedence (include dcf and ccf)?
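The set relationships drilled in Problems 4.58 through 4.63 can be checked numerically by treating a function's decimal expansion as a set of cell numbers. The following sketch is mine (not from the text), using Problem 4.50's F1 and F2 as the sample data:

```python
domain = set(range(16))                  # cells of f(w,x,y,z)
f1 = {0, 1, 4, 5, 10, 12, 15}            # ones of F1 (Problem 4.50)
f2 = {2, 3, 5, 6, 8, 10, 12, 14}         # ones of F2

# and-ing two functions: the ones intersect, the zeros union
assert f1 & f2 == domain - ((domain - f1) | (domain - f2))
print(sorted(f1 & f2))                   # ones of f1 f2 -> [5, 10, 12]

# or-ing two functions: the ones union, the zeros intersect
assert f1 | f2 == domain - ((domain - f1) & (domain - f2))
print(sorted((domain - f1) & (domain - f2)))  # zeros of f1 + f2 -> [7, 9, 11, 13]
```

The two asserts are just De Morgan's laws restated on the decimal cell sets, which is exactly the reasoning the problems ask for.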


Problem 4.65. Given the following Karnaugh Map, find dnf and cnf circuit realizations. (Try to do it with as few gates as possible.)

[Six-variable Karnaugh Map (columns abc = 000 through 100, rows def = 000 through 100); the cell entries did not survive extraction.]


Chapter 5: Minimization of Combinational Circuits

5. Minimization of Combinational Circuits


There are three principal reasons for constructing circuits with as few components as possible. One is economics, one is reliability (in general, the circuit with the fewest components will be the most reliable), and the third is that testing is generally simpler when there are fewer components. The concept of minimization requires a cost function that can be used to compare circuit structures. In the early days of switching circuits, the criterion was the number of relay contacts or the number of armature circuits. Later, a major criterion was the number of diodes. With systems composed of discrete components, the number of gates or the number of gate inputs might be a reasonable criterion (at one point, the cost of wiring the gates was weighted more heavily than the cost of the gates themselves). Where reliability is of paramount concern, the literal count might be a reasonable measure. With integrated circuits, the real estate, or area required on a chip, may very well be the principal criterion. In practice, there will often be no explicit criterion at all, since a complete cost analysis becomes very complicated. However, the principles are important, especially with respect to the multiple-output circuits and hazard-free design to be discussed in the later part of the chapter. This text will consider the following criteria:

Criterion 1: Number of Literals
Criterion 2: Number of Terms
Criterion 3: Number of Inputs to Gates

The third criterion is quite realistic in that it includes the number of connections required and, to a reasonable degree, the real estate required to set up the circuit. Also, in some of the general programmable devices available, there is a need to minimize the number of inputs to gates. The first two criteria are generally not as appropriate, but they give us optional criteria so that it can be seen how different criteria can lead to different circuit selections.
The first criterion is indicative of the number of input connections, and the second is essentially the number of gates. It will also be seen that the minimization processes do not, in general, yield a unique circuit. That is to say, there may be more than one circuit that results in the minimum cost. Under these circumstances, we could select any one of the minimum cost circuits, or perhaps consider other things, such as the availability of terms for utilization in other circuits, which were not considered in the original design process. There are two techniques available for minimizing functions:

A Visual Method Using Karnaugh Maps
A Tabular Method

The Karnaugh map reduction technique reduces to a pattern recognition problem and is most frequently treated as a single process. However, it will be presented here first as a two-step process to emphasize the two aspects of minimization. It will then be pointed out how you can, with experience, accomplish the equivalent of the two-step process by careful direct examination of the Karnaugh Map. In practice, the Karnaugh Map method is used almost exclusively when working with simple single-output switching functions because it is easy and because a true minimum cost system is not aggressively pursued. The technique is subject to error because


it relies on pattern recognition, and it is almost impossible to use (certainly time-consuming and awkward at best) when working with multiple-output systems. The tabular procedure is a bit more time-consuming for simple circuits, but it is far less sensitive to human error, and it extends almost trivially to multiple-output circuits. Also, being an algorithmic tabular technique, it can be programmed for computer use. The minimization processes to be discussed here are those involved with finding minimum cost dnf or cnf circuits. There is no way to know in advance whether a dnf circuit realization or a cnf circuit realization will be cheaper. Therefore, it becomes necessary to find a minimum cost circuit for each form and then select the cheaper of the two.

5.1. Basic Minimization Processes
There are two aspects to minimization:

Finding the best "building blocks"
Finding the most economical coverage using the building blocks

Since the circuits to be developed are dnf and cnf, the "building blocks" will be product terms and sum terms, respectively. The terms used for "best building blocks" will be prime implicants and prime implicates, respectively.

5.1a. Prime Implicant: A prime implicant is an implicant which subsumes no other implicant. Alternatively, a prime implicant is an implicant that is not covered by another implicant.

5.1b. Prime Implicate: A prime implicate is an implicate which subsumes no other implicate. Alternatively, a prime implicate is an implicate that is not covered by another implicate (covering zeros, of course).

In finding the primes for incomplete functions, the don't cares will always be treated as 1's when working with dnf and as 0's when working with cnf. Once the primes (the "best building blocks") have been found, the next step is to find which set of these best building blocks allows a minimal cost cover of the original function.
When realizing incomplete functions, it is never necessary to cover the don't cares, and they are ignored in the second step of the process.
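The first step - finding all the best building blocks - can be sketched in code. The following brute-force prime implicant finder is my own illustration, not an algorithm from the text: minterms are the usual decimal cell numbers, an implicant is represented by the set of cells it covers, and a prime is any implicant not properly contained in another.

```python
from itertools import combinations

def subcube_offsets(free_mask, nvars):
    """All bit patterns that use only the bits set in free_mask."""
    bits = [1 << b for b in range(nvars) if free_mask & (1 << b)]
    offsets = [0]
    for b in bits:
        offsets += [v | b for v in offsets]
    return offsets

def prime_implicants(nvars, ones, dc=frozenset()):
    """Return prime implicants as frozensets of covered cells.
    Don't-care cells may be used to build implicants (treated as 1's)."""
    allowed = set(ones) | set(dc)
    implicants = []
    # enumerate every subcube: pick the missing ("free") variables, then
    # slide the cube over every base value present among the allowed cells
    for k in range(nvars + 1):
        for free in combinations(range(nvars), k):
            free_mask = sum(1 << b for b in free)
            fixed_mask = ((1 << nvars) - 1) ^ free_mask
            for base in {cell & fixed_mask for cell in allowed}:
                cells = frozenset(base | off
                                  for off in subcube_offsets(free_mask, nvars))
                if cells <= allowed:
                    implicants.append(cells)
    # a prime is an implicant not properly contained in another implicant
    return [c for c in implicants if not any(c < d for d in implicants)]

# the function used throughout Section 5.2
print(sorted(sorted(p) for p in
             prime_implicants(4, {0, 3, 5, 7, 8, 9, 11, 15})))
# -> [[0, 8], [3, 7, 11, 15], [5, 7], [8, 9], [9, 11]]
```

These five cell sets are exactly the building blocks tabulated for this function in the prime tables of Section 5.2.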


5.2. Finding the Primes from Karnaugh Maps
Consider the function f(w,x,y,z) = (0,3,5,7,8,9,11,15) shown in Figure 5.1.

        wx                                  wx
 yz      00  01  11  10             yz       00  01  11  10
 00       1   0   0   1             00        0   0   0   0
 01       0   1   0   1             01        1   1   1   0
 11       1   1   1   1             11        1   0   1   1
 10       0   0   0   0             10        0   0   0   0

 f(w,x,y,z) = (0,3,5,7,8,9,11,15)   f(w,x,y,z) = (1,3,5,11,13,15)

Figure 5.1.

Focus on 1-Cells
Every single 1-cell is an implicant (represented by a canonical product term). Because of the way in which the Karnaugh Map is constructed, any two adjacent '1' cells will form a 2-cell implicant that covers the 1-cell implicants. For example, cells (8,9) represent the minterms wx'y'z' + wx'y'z. The wx'y' factors out, giving wx'y'(z' + z) = wx'y'. Each 1-cell implicant will subsume the 2-cell implicant that covers it. Any cluster of four adjacent '1' cells will cover four pairs of 2-cell implicants. It is also a property of the Karnaugh Map that four adjacent cells represent a single term. For example, cells (3,7,11,15) represent:

(3,7,11,15) = w'x'yz + w'xyz + wx'yz + wxyz = x'yz(w' + w) + xyz(w' + w) = x'yz + xyz = (x' + x)yz = yz

The 4-cell implicant is subsumed by each 1-cell implicant it covers and by each 2-cell implicant it covers. This process continues for each 2^n cluster of '1' cells. A major part of the Karnaugh Map minimization process is the visual scanning of the map to find all prime implicants. To do this effectively and efficiently, an algorithm is needed which will assure that all primes will be found. Start in the upper left corner of the map and scan the map by increasing cell numbers until coming to the first '1' cell. Then examine adjacent cells (look in all directions) to see if any 2-cell implicants exist including this cell. If not, the cell represents a 1-cell prime implicant. If one or more '1' cells are adjacent to the cell, then the 1-cell implicant is not prime. If one or more 2-cell implicants exist, then each 2-cell implicant is examined in turn to see if a 4-cell implicant exists which covers it. If not, the 2-cell implicant is prime; otherwise it is not. This process continues, looking for an 8-cell implicant if a 4-cell implicant exists, etc.
When the largest implicant covering that cell is found, scan to the next '1' cell (in numerical order), again looking for the largest implicant that covers it, and continue until all cells have been covered and all prime implicants noted. Using this algorithm, you will notice that by "looking in every direction," you do not need to re-scan directions that have been covered in working with other cells. Also, the concept of "direction" must include all adjacencies. This means that for maps with more than four variables, the various reflections about the 2^n boundaries must also be considered.
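On the numbered map, "adjacency" has a simple arithmetic form: two cells can pair into a 2-cell implicant exactly when their cell numbers differ in a single bit. A small sketch of mine (the function name is my own):

```python
def adjacent(cell_a, cell_b):
    """True when the two map cells differ in exactly one bit, i.e. when the
    corresponding minterms can pair into a 2-cell implicant."""
    diff = cell_a ^ cell_b
    return diff != 0 and diff & (diff - 1) == 0  # exactly one bit set

print(adjacent(8, 9))   # cells (8,9) pair, as in the example above
print(adjacent(3, 4))   # 0011 vs 0100 differ in three bits
# -> True, then False
```

This is the same test regardless of map size, which is why it stays reliable where visual scanning of reflections becomes error-prone.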


Figure 5.2 shows some patterns of primes. Note that the right edge of a Karnaugh Map is adjacent to the left edge and also the top and bottom edges are adjacent.

[Figure 5.2 here shows four 4-variable Karnaugh Maps with their prime implicants written beneath them (for example, w + z and yz); the map entries and the overbars in the prime terms did not survive extraction.]

Figure 5.2. Some Patterns of Primes of Four Variables
Problem 5.1. Find all prime implicants for the functions given below:
a. f(x,y,z) = (0,2,3,4,7)
b. f(w,x,y,z) = (0,2,4,5,6,7,8,10,13,14,15)
c. f(w,x,y,z) = (1,2,7,8,9,10,12,14)

When all the prime implicants are found (these are the "best building blocks"), the second step of selecting the most economical set of these building blocks is best accomplished by a Table of Primes showing which cells are covered by each prime. This table will have all the 1-cells of the function across the top and the prime implicants down the left. For each prime implicant, an "x" is placed in each column representing a cell covered by that prime. Table 5.1 shows the Table of Primes for Figure 5.1. Note that if a function has don't cares, the don't care cells are never placed in the coverage table.



Table of Primes

              |  0 |  3 |  5 |  7 |  8 |  9 | 11 | 15
 {0,8}        |  x |    |    |    |  x |    |    |
 {3,7,11,15}  |    |  x |    |  x |    |    |  x |  x
 {5,7}        |    |    |  x |  x |    |    |    |
 {8,9}        |    |    |    |    |  x |  x |    |
 {9,11}       |    |    |    |    |    |  x |  x |

Table 5.1. Table of Prime Implicants for Figure 5.1

It is now possible to scan this table to form a logical statement, called the Petrick Function (or p-function for short), as to which primes are necessary to cover the function. The p-function can be read "To form the function, we need - - -." For example, {0,8} is essential since it is the only prime available to cover cell 0. {3,7,11,15} is also essential since it is the only prime that covers cell 3. {5,7} is essential since it is the only prime to cover cell 5.

5.2. Essential Cell: A cell which is covered by only one prime is called an essential cell.

5.3. Essential Prime: A prime which contains an essential cell is called an essential prime.

As a part of the "bookkeeping," circle the x in all columns that have only one x, as shown in Table 5.2 (circled entries are written here as (x)), and place an * to the left of that prime to indicate that it is essential.

Table of Primes

                 |  0  |  3  |  5  |  7  |  8  |  9  | 11  | 15
 A* {0,8}        | (x) |     |     |     |  x  |     |     |
 B* {3,7,11,15}  |     | (x) |     |  x  |     |     |  x  | (x)
 C* {5,7}        |     |     | (x) |  x  |     |     |     |
 D  {8,9}        |     |     |     |     |  x  |  x  |     |
 E  {9,11}       |     |     |     |     |     |  x  |  x  |

Table 5.2. Essential Primes (with *) for Figure 5.1

Consider the prime {0,8}. Since it is essential, we will select it as one of the essential terms. Once it has been selected, the entire row may be crossed off. However, using it will cause coverage of cell 8 as well. Since cells 0 and 8 are now covered, we cross off columns 0 and 8, and consider only the remaining rows and columns in the table. After we have done this for each essential cell and each essential prime, we can form the p-function, which is a Boolean expression representing the fact that we need {0,8} and {3,7,11,15} and {5,7}, and either {8,9} or {9,11} to cover the only remaining column, cell 9.

p = {0,8} {3,7,11,15} {5,7} ({8,9} + {9,11})

When we work with larger systems, it is more convenient to "name" primes with capital letters, as p = ABC(D+E). We will select between primes D and E based on the cost of construction.
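The essential-cell scan is easy to mechanize. The following is a sketch of my own bookkeeping (not the text's), with the primes named as in Table 5.2:

```python
def essential_primes(primes, cells):
    """A cell covered by exactly one prime is an essential cell (5.2); any
    prime holding such a cell is an essential prime (5.3)."""
    essentials = []
    for cell in sorted(cells):
        covering = [name for name, cover in primes.items() if cell in cover]
        if len(covering) == 1 and covering[0] not in essentials:
            essentials.append(covering[0])
    return essentials

primes = {"A": {0, 8}, "B": {3, 7, 11, 15}, "C": {5, 7},
          "D": {8, 9}, "E": {9, 11}}
print(essential_primes(primes, {0, 3, 5, 7, 8, 9, 11, 15}))
# -> ['A', 'B', 'C']; cell 9 is left to be covered by D or E, as in the text
```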


Consider now the Karnaugh Map in Figure 5.2, f = (1,3,5,11,13,15). The prime implicants are: {1,3}, {1,5}, {3,11}, {5,13}, {11,15}, {13,15} = A, B, C, D, E, F.

Table of Primes

            |  1 |  3 |  5 | 11 | 13 | 15
 A {1,3}    |  x |  x |    |    |    |
 B {1,5}    |  x |    |  x |    |    |
 C {3,11}   |    |  x |    |  x |    |
 D {5,13}   |    |    |  x |    |  x |
 E {11,15}  |    |    |    |  x |    |  x
 F {13,15}  |    |    |    |    |  x |  x

Table 5.3. Table of Prime Implicants for Figure 5.2

In this case, there are no essential cells and no essential primes. The Petrick Function is generated from the statement of coverage: "To cover cell 1, we need either A or B, and to cover cell 3, we need either A or C, and to cover cell 5, etc."

p = (A+B)(A+C)(B+D)(C+E)(D+F)(E+F)

To find the different circuits that can be constructed, we multiply out this expression. Generally, this is more easily done by pairing up terms; for example, (A+B)(A+C) = A+BC.

p = (A+BC)(D+BF)(E+CF) = ADE + ACDF + ABEF + ABCF + BCDE + BCDF + BCEF + BCF

Each term of this dnf form of the Petrick function represents a collection of terms that will realize the circuit. In statement form it can be said "To form the function f, we need terms A and D and E, or we need terms A and D and C and F, or we need - - - etc." We may remove all subsuming terms (here ABCF, BCDF, and BCEF subsume BCF), leaving p = ADE + ACDF + ABEF + BCDE + BCF. The question remains which of these is the most economical. We then look at the cost of the realizations. The cost of each term can be developed from the criteria. For example, term A is {1,3} = w'x'z, which has three literals. Under Criterion 1, it has a cost of 3. Under Criterion 2, since it is a single term, it has a cost of 1, and under Criterion 3, it has three inputs and its output is an input to a gate, and so represents a cost of 4. In this example, each term will have the same cost, and we can simply select the p-term with the fewest prime implicants. This would be ADE or BCF. Since we will work with the Table of Primes, look for ways to reduce the table before developing the p-function. The existence of essential primes allows us to cross off rows and columns that are covered and permits reduction in table size.
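The multiplying-out, with subsuming terms dropped as they appear, is mechanical. A sketch of mine (not the text's program), using frozensets of prime names as p-terms:

```python
def petrick(sum_terms):
    """Expand a Petrick function (a product of sums of prime names) into
    dnf, absorbing subsuming terms after each multiplication."""
    products = {frozenset()}
    for s in sum_terms:
        products = {p | {name} for p in products for name in s}
        # X + XY = X: drop any product that is a proper superset of another
        products = {p for p in products if not any(q < p for q in products)}
    return products

p = petrick([("A", "B"), ("A", "C"), ("B", "D"),
             ("C", "E"), ("D", "F"), ("E", "F")])
best = min(len(t) for t in p)
print(sorted(sorted(t) for t in p if len(t) == best))
# -> [['A', 'D', 'E'], ['B', 'C', 'F']], the two cheapest covers found above
```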
There are other cases as well, which will now be developed. Although there are many rules for reducing the Table of Primes, two more rules are the most helpful and quite sufficient. The first is called column removal. The second is row removal.

5.2.1. Column Removal
If two columns have x's in exactly the same rows, or if one has fewer x's but those which it has are in the same rows as those in the other column, then the column with the most x's can be crossed off. The reason is that the p-function sum term for the column with the most x's subsumes the term for the column with the fewest x's. For example, the terms might be p = (A+B)(A+B+E). The p-function is a logical statement and follows the rules of Boolean Algebra, and so (A+B)(A+B+E) = (A+B). We may also view it in the following way. To cover Column 1 we must have Term A or Term B. To cover Column 2 we must have Term A or Term B or Term E. Since we must have Term A or B to cover Column 1,

then whichever we choose will also suffice to cover Column 2. Therefore, we can ignore the coverage requirements for Column 2. We could wait and simply perform the subsuming operation after writing out the p-function. However, the removal of the column in the table might result in the ability to remove rows, resulting in a simpler p-function to evaluate. In summary:

5.4. Dominating Column: A column is said to dominate another column if it has x's in (at least) all of the rows that the other column has x's.

5.5. Column Removal: A dominating column can be removed without changing the effect of the search for a minimum cost realization.

5.2.2. Row Removal
5.6. Dominated Row: A row is said to be dominated by another row if that other row has x's in (at least) the same columns.

5.7. Row Removal: If the term representing a dominated row has a cost equal to or greater than the term representing the row that dominates it, then the dominated row may be removed. The reasoning behind the rule is based on the fact that the dominating term covers as many or more cells and, at no greater cost, there would be no reason to include the dominated term. If the cost is equal, then the removal of the row might remove some equal minimum cost realizations, but will not remove them all. If you want to view all minimum cost realizations, then only exercise row removal if the cost of the row to be removed is strictly greater.

The three rules for prime table reduction can be used more than once in the process of reducing a table. A good algorithm to follow is to apply the rules in the order they have been presented here, and then repeat them until no more reduction is possible. A table is said to be cyclic when no more reduction is possible. There is also a need for bookkeeping when reducing tables in order that others may follow the steps taken. It is recommended that a log be kept of each step. The procedure for reducing prime tables, along with a recommended logging procedure, is shown in Table 5.4.

Table 5.4. Prime Table Reduction - Single Functions
I. Essential Prime Implicants (a column contains only one x)
a. Circle the x.
b. Place an * next to the implicant name to denote that it is essential.
c. Place the implicant in the Petrick Function.
d. Remove all columns covered by the essential implicant.
e. Delete the row from the table.
II. Column Removal
a. Delete all columns that dominate other columns.
III. Row Removal: Any row may be removed which is both:
a. Dominated by another row, and
b. Of greater cost than that row (of equal or greater cost if we don't require all possible minimal forms).
Note: In order to check previous work, it is necessary to develop a log of the activities and the reason for the activities. For example:


[The worked example's coverage table - rows A, B, C, D against columns C1 through C6, with a cost column - did not survive extraction; its reduction selects primes A and D, per the log below.]

1. A is essential in Column C1
2. Column C2 dominates Column C3
3. Column C6 dominates Column C3
4. Row C is dominated by Row D and costs more
5. D is essential (secondarily, in Column C3)

p = AD
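The Table 5.4 rules can also be mechanized. The following is a sketch of mine (not the text's procedure verbatim): it applies Rule I, then II, then III, and repeats until the table is cyclic. As noted above, with equal costs Rule III may discard some, but never all, of the minimum cost solutions.

```python
def reduce_prime_table(rows, cost):
    """rows: prime name -> set of cells it covers; cost: prime name -> cost.
    Returns (primes chosen as essential, the remaining cyclic table)."""
    rows = {n: set(c) for n, c in rows.items()}
    chosen = []
    progress = True
    while progress and rows:
        progress = False
        cells = sorted(set().union(*rows.values()))
        # Rule I: essential primes (a column containing only one x)
        for cell in cells:
            covering = [n for n in rows if cell in rows[n]]
            if len(covering) == 1:
                essential = covering[0]
                chosen.append(essential)
                covered = rows.pop(essential)
                rows = {n: c - covered for n, c in rows.items() if c - covered}
                progress = True
                break
        if progress:
            continue
        # Rule II: delete a column that dominates another column
        for c1 in cells:
            for c2 in cells:
                if c1 != c2 and all(c1 in rows[n] or c2 not in rows[n]
                                    for n in rows):
                    for n in rows:
                        rows[n].discard(c1)
                    rows = {n: c for n, c in rows.items() if c}
                    progress = True
                    break
            if progress:
                break
        if progress:
            continue
        # Rule III: delete a row dominated by a row of no greater cost
        for n1 in list(rows):
            for n2 in rows:
                if n1 != n2 and rows[n1] <= rows[n2] and cost[n1] >= cost[n2]:
                    del rows[n1]
                    progress = True
                    break
            if progress:
                break
    return chosen, rows

primes = {"A": {0, 8}, "B": {3, 7, 11, 15}, "C": {5, 7},
          "D": {8, 9}, "E": {9, 11}}
cost = {"A": 4, "B": 3, "C": 4, "D": 4, "E": 4}   # Criterion 3 costs
print(reduce_prime_table(primes, cost))
# chooses A, B, C, then one of {8,9} / {9,11} for cell 9
```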

To see the action more clearly, the student should re-draw the unreduced coverage table and then execute (in proper order) all of the actions implied by each line in the legend. When complete, the student will appreciate the need for the legend.

Problem 5.2. For each of the functions in Problem 5.1:
a. Use Criterion 3 to find the minimum cost realizations.
b. Use Criterion 2 to find the minimum cost realizations.
c. Use Criterion 1 to find the minimum cost realizations.

Knowing what the process of selecting primes entails, we now return to the Karnaugh Map to see if the process can be reasonably well accomplished without resorting to the tables.

5.3. Essential Primes on Karnaugh Maps
If a prime implicant contains an essential cell, it is an essential prime. A simpler way of stating this is that if it is the only prime implicant covering the cell being scanned, then it is essential. It may be helpful to shade in all the cells covered by each essential prime. Then the other cells in this shaded area do not need to be scanned, since they will be covered by the essential prime. This is equivalent to "crossing off the columns" covered by an essential prime in a prime table reduction. The Karnaugh Map method is simplified if one scans the entire map for essential primes, shades in the areas covered, and then looks for the most economical coverage of the remaining cells.

5.4. Most Economical Coverage on Karnaugh Maps
The most economical cover of the cells remaining after the essential primes have been found is a function of the development of pattern recognition talents. Note that the more cells an implicant covers, the smaller its cost: the more cells that can be covered with an implicant, the better. However, especially with incompletely specified functions, the selection of the term with the greatest coverage is not always prudent, since it may be covering other cells redundantly or unnecessarily.


5.5. Minimization of Incompletely Specified Functions
As mentioned previously, incomplete functions arise either when one or more input states cannot occur, or when an output can be either a 0 or a 1 without affecting the circuit's intended operation. The input state may map into either a 0 or a 1, and the output condition is said to be a "don't care" condition. Don't cares are represented as + entries in Tables of Combinations or in the associated Karnaugh Maps. The principal effect of don't cares is in the Table of Primes. During the process of finding the prime implicants, the + cells are all considered as 1-cells. However, any cell which is a don't care cell does not need to be covered and is not placed in the Table of Primes. Therefore, a don't care cell cannot be an essential cell, nor is it necessary that it be covered. When working with a Karnaugh Map, attention is focused on covering the "1" cells, with the "+" cells used only to form (more economical) primes.

Problem 5.3. Use the Karnaugh Map Method (without resorting to tables) to find a minimum cost (Criterion 3) dnf expression for the following functions:
a. f(x,y,z) = (2,5,6,7)
b. f(w,x,y,z) = (1,3,6,9,11,14) + dc = (7,12,13,15)
c. f(w,x,y,z) = (1,3,5,7,12,13,15) + dc = (6,14)

Problem 5.4. Find all prime implicants for each of the functions in Problem 5.3 and use the Table of Primes and the procedures of Table 5.4 to obtain a minimum cost dnf expression for each.

5.6. Minimization of Conjunctive Forms
With conjunctive forms, the minimization processes become the dual of the processes just described. The focus of attention is on the 0's instead of the 1's. Prime implicates are those which do not subsume other implicates, or alternatively, those whose 0's are not covered by another implicate. Don't cares are considered as 0's during the process of finding primes. The cost of an implicate by Criteria 1 and 3 will be smaller the more cells it covers. (All implicates cost the same under Criterion 2.)
The Table of Primes is used to show coverage of 0-cells (don't care cells are never included, since they do not have to be covered). An essential cell will be a 0-cell that is covered by only one prime implicate, and a prime implicate that covers an essential cell is an essential prime implicate. In working with Karnaugh Maps, the process is to examine all 0-cells, looking for clusters of 2, 4, 8, etc., in exactly the same patterns that were searched for with respect to 1-cells when minimizing disjunctive forms. Generally, the process is aided by scanning first for the essential cells and shading in those 0-cells covered by the associated essential prime implicates. The remaining 0-cells are then examined for a minimum cost cover.

Problem 5.5. Use the Karnaugh Map Method (without resorting to tables) to find a minimum cost (Criterion 3) cnf expression for the functions given below.
a. f(x,y,z) = (2,5,6,7)
b. f(w,x,y,z) = (1,3,6,9,11,14) + dc = (7,12,13,15)
c. f(w,x,y,z) = (1,3,5,7,12,13,15) + dc = (6,14)

Problem 5.6. Find all of the prime implicates for each of the functions in Problem 5.5 and use the Table of Primes and the procedures of Table 5.4 to obtain a minimum cost cnf expression for each.

5.7. Minimization Using Tabular Techniques
The Karnaugh Map Method covered previously is the method most often used when minimizing circuits for simple single output functions. However, for minimizing circuits

with multiple outputs, the Tagged Quine-McCluskey procedure for finding primes is essentially unchallenged. The Quine-McCluskey procedure also becomes very useful if the number of variables requires Karnaugh Maps that are so large that you may lose confidence in your ability to find all the primes. The Quine-McCluskey method, as originally developed, worked with algebraic forms and the binary notation for canonical terms. However, as the decimal notation became popular, it was obvious that the decimal notation substantially reduces the effort required. Only the decimal version is presented here. In order to minimize a function algebraically, we must first place the function in canonical form. It was shown in the previous chapter how the decimal equivalent values for canonical terms could be developed. We now observe that some of the algebraic manipulations are equivalent to simple numeric operations with the terms in decimal notation. The existence of these properties is easily seen by viewing the equivalent operation in binary.

a. Let the bit location b of a variable in a canonical term be defined as the (decimal) position of the variable numbered from the right, beginning with the right-most variable as 0, under the assumption that the variables are ordered in accordance with the domain specification. For example, for f(x,y,z), z will always be represented as occupying bit position 0, y bit position 1, and x bit position 2.

b. Let a base number B for a disjunctive normal term be defined as the (decimal) number representing the canonical term if the missing variables are represented with complemented literals (with 0's in their respective bit locations of the binary representation). For example, xy' in f(x,y,z) would have base number 100 in binary, or 4 in decimal. The term y'z would have a base number of 001 in binary, or 1 in decimal.

Theorem 5.1. Expansion Theorem Number 2. Let B be the base number for a term missing a single variable in binary digit location b; then the decimal notation for the canonical terms covered will be {B, B + 2^b}.

For example, given f(x,y,z) = xy + y'z + z', find the decimal equivalent:

 xy:   B(xyz') = 110 = 6,  missing z (2^0 = 1)              ->  xy = {6,7}
 y'z:  B(x'y'z) = 001 = 1, missing x (2^2 = 4)              ->  y'z = {1,5}
 z':   B(x'y'z') = 000 = 0, missing x, y (2^2 = 4, 2^1 = 2) ->  z' = {0,2,4,6}

In the last case, the theorem was applied twice. A simple way of viewing this is to use a format similar to the following (note that all terms are functions of x,y,z):

 xy  :  4 + 2 + (1)       {6,7}
 y'z :  (4) + 0 + 1       {1,5}
 z'  :  (4) + (2) + 0     {0,2,4,6}

The term is replaced with a decimal notation showing the decimal weights assigned to each variable when it is inserted in the term; the weights of the missing variables are shown here in parentheses. The terms that are formed will include all possible combinations (sums) across the term.
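Theorem 5.1, applied once per missing variable, is easy to mechanize. A sketch of mine (the function name is my own):

```python
def covered_cells(base, missing_bits):
    """Cells covered by a normal term with base number `base` and missing
    variables at bit locations `missing_bits`: add every combination of
    the missing weights 2**b (Theorem 5.1, applied repeatedly)."""
    cells = [base]
    for b in missing_bits:
        cells += [c + (1 << b) for c in cells]
    return sorted(cells)

# the example above, f(x,y,z) = xy + y'z + z'
print(covered_cells(6, [0]))     # xy  -> [6, 7]
print(covered_cells(1, [2]))     # y'z -> [1, 5]
print(covered_cells(0, [1, 2]))  # z'  -> [0, 2, 4, 6]
```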


5.8. Index: Let the index of a term be the number of 1 bits in its binary representation. For example: x y z has an index of 0, x y z or x y z have indices of 1, and xyz has an index of 3. Theorem 5.2. Reduction Theorem Number 2. If two canonical terms differ in their index by 1 and the term with the larger index has a decimal notation that is greater by a power of 2, then the two terms will reduce algebraically to a term that has a base number equal to the term with the lower index. This theorem is basically the inverse of Theorem 5.1, and is also a restatement of Theorem 3.8 which is repeated here. Theorem 3.8a: ab + a b = b To prove this, consider any term t1 with a binary representation B1. Consider also a term t2 which has the binary representation B2 = B1 + 2 . In forming B2, a binary number with an index of 1 is added to B1. (The bit position of the 1 added is n.) B2 will be identical to B1 in all bit positions to the right of the nth bit. If the nth bit position of B1 is 0, then the corresponding bit position in B2 will be 1 with all remaining bit positions unchanged. B2 will have an index one greater than the index of B1. This is exactly the case for Theorem 3.8a and t1 + t2 can be reduced. If, however, the nth bit of B1 is a 1, then the nth bit position of B2 will be rippling a zero (with a one to carry). This means, at this point before completing the carry, that B2 has an index one less than B1. If the next bit of B1 is 0, then the carry will result in the next bit of B2 being a 1 with all remaining bit positions unchanged. The result is that B2 at this point has the same index as B1. If the next bit of B1 had been a 1, then the next bit of B2 would have been a 0, decreasing the relative index of B2 with respect to B1. Indeed, the index of B2 will remain the same until a 0 is encountered in B1 (while the index of B1 continues to increase). When the 0 is encountered, the index of B2 will increase by 1, and all remaining bits will remain unchanged. 
The following statement can be made:

Theorem 5.3. If 2^n is added to the binary representation of a term, the index of the resultant sum will be greater only if the binary representation has a 0 in the nth bit position.

These theorems provide the basic tools to set up the decimal form of the Quine-McCluskey procedure for finding primes.

5.8. The Quine-McCluskey Procedure for Disjunctive Forms
1. If the functions are not in canonical form, expand them to decimal canonical form. Each function should be expressed in the form: fi( ) = ( ) + dc = ( ).
2. Let {ones} = the set of 1-cells and {dc} = the set of don't care cells. The union of these sets is written in a column at the left, in groups of increasing index, dividing each group with a line (to show the group boundaries).
3. Compare each cell with all cells in the group below (whose index is 1 greater). If a cell number below is greater by a power of 2, then check off both cells and place the pair in parentheses to the right, along with the power of 2 by which they differ. Note: The smaller number represents the base number B of the normal term, with the bit removed for the bit position representing that power of 2. This is done for each cell in the first group. A line is then drawn in Column 2 to separate the group just formed from the next group. This procedure is repeated for each cell in the second group, etc., through the second group from the bottom.


Therefore, the second column will contain all 2-cell implicants. Furthermore, each implicant represented will have the same index as its smaller base number and will be missing the variable represented by the power-of-two number.
4. If there exists a pair in one group and a pair in the group below with the same power of 2 removed, then the remaining variables must be the same. Further, if the base number of the lower pair is greater by a power of 2, then yet another variable can be removed. If so, the two pairs are checked off in Column 2 and the four numbers placed in Column 3, along with the powers of 2 of the bit positions now vacant. Every entry in this column therefore represents a 4-cell implicant. The smallest number represents the base number of the normal form with both variables removed.
5. This process is continued as more columns are formed until no more combinations are possible. The groupings that have not been checked off are the prime implicants, which may appear in any column. Examples are shown in Tables 5.5 and 5.6 for Figures 5.1 and 5.2.

Table 5.5. Quine-McCluskey Reduction for Figure 5.1.
Column 1      Column 2            Column 3
 0 (checked)  (0,8)   (8)  B      (3,7,11,15) (4,8) A
 ----         ----
 8 (checked)  (8,9)   (1)  C
 ----         (3,7)   (4)  (checked)
 3 (checked)  (3,11)  (8)  (checked)
 5 (checked)  (5,7)   (2)  D
 9 (checked)  (9,11)  (2)  E
 ----         ----
 7 (checked)  (7,15)  (8)  (checked)
11 (checked)  (11,15) (4)  (checked)
 ----
15 (checked)

Table 5.6. Quine-McCluskey Reduction for Figure 5.2.
Column 1      Column 2
 1 (checked)  (1,3)   (2)  A
 ----         (1,5)   (4)  B
 3 (checked)  ----
 5 (checked)  (3,11)  (8)  C
 ----         (5,13)  (8)  D
11 (checked)  ----
13 (checked)  (11,15) (4)  E
 ----         (13,15) (2)  F
15 (checked)

The following comments are in order:
1. Every entry in the table is an implicant; the first column contains 1-cell implicants, the second contains 2-cell implicants, the third contains 4-cell implicants, etc.
2. All terms in the left-most column (Column 1) are single-cell implicants. Any prime in that column will have a Criterion 1 cost equal to the number of input variables, a Criterion 2 cost of 1, and a Criterion 3 cost equal to 1 more than the Criterion 1 cost. The Criterion 1 cost is decreased by one for each column to the right.
This is also true for Criterion 3, except that there is no Criterion 3 cost of 2 - the cost drops to 1 instead. (Why?)
3. All representations of terms can be put in algebraic form by forming the minterm for the base number and removing the variables in the bit positions represented by the powers of 2.
4. The Quine-McCluskey Procedure is guaranteed to find all primes.
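The column-building procedure above can be sketched as a short Python function. This is an illustrative implementation, not the text's tabular notation: each table entry is kept as a pair (set of covered cells, set of removed powers of 2), and Theorem 5.3's bit test guards each combination.

```python
from itertools import combinations

def qm_primes(ones, dc=()):
    """Return the prime implicants of f = ones + dc as pairs
    (covered cells, removed bit weights)."""
    # Column 1: every cell is a 1-cell implicant with no variables removed.
    column = [(frozenset([c]), frozenset()) for c in sorted(set(ones) | set(dc))]
    primes = []
    while column:
        next_col, checked = [], set()
        for e1, e2 in combinations(column, 2):
            (g1, r1), (g2, r2) = e1, e2
            d = min(g2) - min(g1)
            # Combine only groups with the same variables removed that differ
            # by a power of 2, where the lower base has a 0 in that bit
            # position (Theorem 5.3).
            if r1 != r2 or d <= 0 or d & (d - 1) or min(g1) & d:
                continue
            if frozenset(c + d for c in g1) != g2:
                continue
            checked.update([e1, e2])
            merged = (g1 | g2, r1 | frozenset([d]))
            if merged not in next_col:
                next_col.append(merged)
        primes += [e for e in column if e not in checked]
        column = next_col
    return primes

# The function of Table 5.5: 1-cells 0,3,5,7,8,9,11,15.
groups = sorted(tuple(sorted(g)) for g, _ in qm_primes([0, 3, 5, 7, 8, 9, 11, 15]))
print(groups)  # [(0, 8), (3, 7, 11, 15), (5, 7), (8, 9), (9, 11)]
```

The five groups printed are exactly the primes A through E of Table 5.5.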



Normally, if the tabular procedure is used for finding the primes (as opposed to using a Karnaugh Map), the prime table coverage procedure is also used to find the minimum cost realization, providing a completely tabular technique.

Problem 5.7. Use the Quine-McCluskey Procedure to find all prime implicants for the functions given below (repeated from Problem 5.3):
a. f(x,y,z) = (2,5,6,7)
b. f(w,x,y,z) = (1,3,6,9,11,14) + dc = (7,12,13,15)
c. f(w,x,y,z) = (1,3,5,7,12,13,15) + dc = (6,14)

5.9. Minimization of Conjunctive Normal Forms
The processes for finding the prime implicates are identical to those for finding the prime implicants, except that the decimal equivalents of the 0's and don't cares are placed in the left-hand column. From that point until the algebraic forms of the primes are extracted, the procedure is identical to that used for disjunctive forms. When the sum terms representing the primes are extracted, the convention for conjunctive terms must be followed.

Problem 5.8. Use the Quine-McCluskey Procedure to find all prime implicates for the functions given below (repeated from Problem 5.3):
a. f(x,y,z) = (2,5,6,7)
b. f(w,x,y,z) = (1,3,6,9,11,14) + dc = (7,12,13,15)
c. f(w,x,y,z) = (1,3,5,7,12,13,15) + dc = (6,14)

5.10. Multiple Function Minimization
If a circuit has three outputs, the previous methods may be used to develop circuits that are minimum cost individually. However, it might be possible to reduce the overall cost if some terms could be shared between functions. Working with dnf functions requires viewing the intersection of all the functions with each other, in all possible combinations. This is extremely difficult with Karnaugh maps. For example, to find all primes that would be candidates for a minimum cost realization of three functions f1, f2 and f3, you would need to find the prime implicants of f1, f2, f3, f1f2, f1f3, f2f3 and f1f2f3.
However, both the Quine-McCluskey procedure for finding all primes and the prime table reduction process require only minor modifications to solve this problem. The effort expended will be only a bit more than that used in minimizing a single function.

5.11. The Tagged Quine-McCluskey Procedure
The Quine-McCluskey Procedure introduced in Section 5.8 is modified as follows for multiple output functions.
Steps 1 and 2 remain the same, except that a "tag" showing the functions in which the cell is an implicant is appended to each element in the column.
3. The same algorithm used for a single function is used to determine the 2-cell implicants. The tag shows the functions in which each 2-cell implicant occurs; it is most easily determined as the intersection of the tags of the two cells that are combining to go in the next column. Important: A cell that joins is not checked off unless its tag is identical to the intersection tag.
4. This process repeats for all columns. The result is that every prime in every function, and in every possible intersection of the functions, becomes evident. Each unchecked table entry is a prime, and the map in which it is prime is the intersection of the functions represented in the tag.
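The tag-intersection rule in step 3 can be grafted onto the single-function sketch. This is my own illustrative encoding, checked against the three-function example that follows; each table entry becomes (cells, removed powers, tag).

```python
def tagged_qm_primes(functions):
    """functions maps an output name, e.g. 'f1', to its set of cells
    (1-cells plus don't cares).  Returns primes as (cells, removed, tag)."""
    all_cells = set().union(*functions.values())

    def tag_of(cells):
        return frozenset(f for f, cs in functions.items() if cells <= cs)

    column = [(frozenset([c]), frozenset(), tag_of({c})) for c in sorted(all_cells)]
    primes = []
    while column:
        next_col, checked = [], set()
        for i, e1 in enumerate(column):
            g1, r1, t1 = e1
            for e2 in column[i + 1:]:
                g2, r2, t2 = e2
                d = min(g2) - min(g1)
                if r1 != r2 or d <= 0 or d & (d - 1) or min(g1) & d:
                    continue
                if frozenset(c + d for c in g1) != g2:
                    continue
                t = t1 & t2                      # intersection tag
                if not t:
                    continue                     # joins in no common function
                # A cell is checked off only if its own tag equals the
                # intersection tag (the "Important" rule in step 3).
                if t1 == t:
                    checked.add(e1)
                if t2 == t:
                    checked.add(e2)
                merged = (g1 | g2, r1 | frozenset([d]), t)
                if merged not in next_col:
                    next_col.append(merged)
        primes += [e for e in column if e not in checked]
        column = next_col
    return primes

# The example functions: f1 = (0,2,3,4)+dc(6), f2 = (0,2,5)+dc(6), f3 = (3,4,5)+dc(6).
fns = {"f1": {0, 2, 3, 4, 6}, "f2": {0, 2, 5, 6}, "f3": {3, 4, 5, 6}}
for cells, removed, tag in tagged_qm_primes(fns):
    print(sorted(cells), sorted(tag))
```

Run on the example, this yields nine primes: the single cells 3, 5, and 6, five pairs, and the group (0,2,4,6), matching primes A through I of the worked example.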

As an example, consider now the following three functions. The task is to find all (prime) implicants of interest. Given:
a. f1(x,y,z) = (0,2,3,4) with dc = (6)
b. f2(x,y,z) = (0,2,5) with dc = (6)
c. f3(x,y,z) = (3,4,5) with dc = (6)
We must find the prime implicants of all intersections of the three functions. Although we would not generally use Karnaugh Maps, they are used here to emphasize the process.
A. By Karnaugh Maps (the don't care cell 6 is marked + on each map), the primes are:
f1: z', x'y
f2: x'z', yz', xy'z
f3: xy', xz', x'yz
f1f2: x'z', yz'
f1f3: xz', x'yz
f2f3: xy'z, xyz'
f1f2f3: xyz'

Note that when a prime is present in more than one map, the important location is the map with the greater number of functions.

B. By Tagged Quine-McCluskey:
Column 1              Column 2                       Column 3
0 f1f2-- (checked)    (0,2) (2) f1f2-- B             (0,2,4,6) (2,4) f1---- A
----                  (0,4) (4) f1---- (checked)
2 f1f2-- (checked)    ----
4 f1--f3 (checked)    (2,3) (1) f1---- C
----                  (2,6) (4) f1f2-- D
3 f1--f3 G            (4,5) (1) ----f3 E
5 --f2f3 H            (4,6) (2) f1--f3 F
6 f1f2f3 I


Problem 5.9. Use the tagged Quine-McCluskey Procedure to find all primes of interest (both implicants and implicates) of a system(x,y,z) with the following three output specifications:
a. f1(x,y,z) = (0,1,2,3) + dc = (6)
b. f2(x,y,z) = (2,3,6)
c. f3(x,y,z) = (1,4,5) + dc = (4)

Having found the building blocks of interest, the problem now becomes finding a minimum cost cover. The concepts developed for single output functions are still valid, but the issue is complicated a bit, depending on the cost criterion used and the need to keep track of the coverage of all the output functions. We begin the same way for all three criteria. The Table of Primes is constructed as a single table, but with the cover columns for each function grouped together. Function boundaries are more critical to Criterion 3 than to Criteria 1 and 2, but even for Criteria 1 and 2 they are important. In listing the primes at the left of the table, it is desirable, but not essential, to order them by the number of functions they are in. The x's are placed in the columns where coverage occurs, as before, but only for those functions that appear in the tag of the prime. With Criterion 1, the cost is the number of literals, and if a gate is shared, its second and subsequent uses do not increase the basic system literal cost; thus the second and subsequent uses have no cost. This is not a particularly good criterion, since it costs something to wire the connection (and provide an input port). However, we will continue with its use to point out the implications. The principal implication will be found in the decision that "since it has zero cost, it will be used." A similar problem occurs with Criterion 2 and the number of terms: since the second and subsequent uses of a term do not increase the number of terms, the second and subsequent uses cost zero. Criterion 3 is the only criterion discussed here that attaches a cost to second, third, etc. uses of an implicant.
The first use cost will be the same as for a single function. However, with Criterion 3 the second use will still cost 1 input to the output gate. Here, we have a situation where the cost changes from the full cost on the first use to 1 for each subsequent use. Since Criterion 3 is more realistic, it will be considered first. The principal changes that must be made to the original table reduction rules concern essential primes, column domination, and the notation used with the Petrick Function. With single functions, we could cross off dominated columns because the implicant that covered the column with the fewer x's would also cover the column with the more x's. This is still true. However, with multiple functions, covering an x in another function introduces an additional cost of 1 under Criterion 3. With Criterion 3, therefore, column domination across function boundaries is not permitted. This also affects the rule regarding essential primes: only those cells (columns) within the functions in which the essential prime occurs can be crossed off. This means that the row representing an essential prime can be crossed off only within the function in which it is essential. However, the fact that the prime will be constructed and used means that each subsequent use will cost 1, which will affect row removal. These changes are reflected in the modified Prime Table reduction rules for Criterion 3, shown in Table 5.7.


Table 5.7. Prime Table Reduction for Criterion 3
I. Essential Primes (a column contains only one x).
A. Circle the x.
B. Place an * next to the name to denote that it is essential.
C. Add the name to the Petrick function, using a subscript to denote the function in which it is essential. If it is essential in more than one function, include it once for each function (with the appropriate subscript).
D. Remove all columns covered by the prime in (only) those functions where it is essential.
E. Delete the row in (only) those functions where coverage has been deleted or does not exist.
F. Reduce the cost to 1 for subsequent use in other functions.
II. Column Domination.
A. Delete all columns that dominate other columns within the same function (only).
III. Row Domination. (Only across the entire table - never on an individual function basis. Note that this is exactly the same as for single function tables.)
A. Remove rows which are both
1. dominated by another row, and
2. of greater cost than that row (or of equal or greater cost if we don't require all possible minimal forms).

The formation of the Petrick function is also modified: each essential prime is tagged with the function in which it was essential when placed in the Petrick function. When the table cannot be reduced further, the completion of the Petrick function proceeds as before, except that function tags are included with each prime name. When the Petrick function is "multiplied out," the entire name (with the subscript) represents the prime. For example, A1A1 = A1 but A1A2 = A1A2; that is, no reduction can take place unless the subscripts are the same. In evaluating the Petrick function, A1 and A2 are recognized as being the same term. For the first appearance, the term carries its full cost, while subsequent appearances have a cost of only 1. Note that this is true only with Criterion 3 (with Criteria 1 and 2, subsequent appearances have a cost of 0).
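The subscripted multiplication rule can be mechanized. The sketch below multiplies out a Petrick function whose literals are (prime, function) pairs and applies the Criterion 3 costing: full cost on a prime's first appearance, 1 thereafter. The function name, and the use of Problem 5.10's quoted costs as first-use costs, are my own assumptions for illustration.

```python
from itertools import product

def petrick_min_criterion3(petrick, full_cost):
    """petrick: list of sum factors, each a list of (prime, function) literals.
    A1*A1 absorbs to A1, but A1*A2 stays A1*A2 since the subscripts differ;
    taking a frozenset over each choice performs exactly that absorption."""
    def cost(term):
        total, seen = 0, set()
        for prime, _fn in term:
            total += 1 if prime in seen else full_cost[prime]
            seen.add(prime)
        return total
    # Multiply out: pick one literal from each sum factor, keep the cheapest.
    best = min((frozenset(choice) for choice in product(*petrick)), key=cost)
    return sorted(best), cost(best)

# P = A1(B1+C1)(A2+C2)(A3+B3) with first-use costs A=3, B=2, C=4:
petrick = [[("A", 1)], [("B", 1), ("C", 1)],
           [("A", 2), ("C", 2)], [("A", 3), ("B", 3)]]
selection, total = petrick_min_criterion3(petrick, {"A": 3, "B": 2, "C": 4})
print(total)  # 7
```

For this input the cheapest cover charges A once in full and then twice at cost 1, plus B once, for a total of 7.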
This process is carried out for the original example in Figure 5.3.

Problem 5.10. Given the following Petrick function, where the Criterion 1 costs are A=3, B=2, C=4, evaluate all possible realizations for all criteria (1, 2, and 3).
P = A1(B1+C1)(A2+C2)(A3+B3)

Problem 5.11. Given the following Petrick function and the associated Q/M primes, complete the design, showing the circuits (assume a function of X, Y, and Z):
a. if the Q/M table was developed about the 1s of the function, and
b. if the Q/M table was developed about the 0s of the function.
p = A1B2B3C2D3
A: (0,1,4,5) (1,4)
B: (1,3) (2)
C: (4,5) (1)
D: (4)

[Prime table for the Figure 5.3 example: primes A through H listed against the covered cells of f1, f2, and f3, with x's marking coverage, circled x's and stars marking essential primes, and a Criterion 3 cost column at the right. The reduction steps below refer to this table.]


1. B is essential in f2(0).
2. H is essential in f2(5).
3. G is essential in f3(3).
4. 2 dominates 0 in f1.
5. B is dominated by A (and cost is the same).
6. C is dominated by G (and costs more).
7. A is secondly essential in f1(0).
8. G is secondly essential in f1(3).
9. F is now dominated by E (and cost is the same).
10. E is secondly essential in f3(4).

From steps 1, 2, 3, 7, 8 and 10: p = B2H2G3A1G1E3

There is a single realization, B2H2G3A1G1E3, with cost 3 + 4 + 4 + 1 + 1 + 3 = 16.
Figure 5.3. Multiple Output Example - Criterion 3


This results in the following circuit realization.


[Figure 5.4: the circuit realization built from the shared primes A, B, E, G and H, together with Karnaugh maps of f1, f2, and f3 showing the final coverage.]

Figure 5.4. Multiple Output Example Realization

When using Criteria 1 or 2, the process is simplified by the fact that second and subsequent uses are considered to have zero cost. The method for Criterion 3 can be modified as follows:
1. With essential primes, treat the table exactly as for a single function. This means covered columns are removed in all functions.
2. The reason the columns can be removed is that the prime must be constructed and the cost of subsequent use will be zero. Therefore, it is going to be used in all functions where it covers any cells. To keep track of this, place the prime in the Petrick Function once for each function in which it will be used, along with the subscript for that function.
3. Column domination can occur across function boundaries (as if it were a single function). However, a note must be made to remind us that when the dominated column is finally covered, the prime will also be used to cover the dominating column in the other function.
4. When evaluating the Petrick Function, the first appearance of a literal has full cost, but subsequent appearances cost zero.
The example is now completed for costs using Criterion 1 and Criterion 2. These modifications are reflected in Table 5.8.


Table 5.8. Prime Table Reduction for Criteria 1 and 2
Fundamentally, the same process is followed as with Criterion 3, with the following exceptions:
I. Essential Primes.
a. Remove the columns covered by the essential prime in all functions.
b. As columns are removed for each prime in each function, enter the prime in the Petrick Function with the subscript denoting the function for those columns.
c. Delete the entire row. (The cost has actually gone to zero, but this is immaterial since the whole row is deleted.)
II. Column Domination.
a. Delete any column that dominates another column, regardless of function boundaries.
b. However, if the column that is retained is in another function, a note must be made that when the dominated column is covered, the prime must be used not only in the function where the column exists but in all functions containing columns that were crossed off because they dominated.
III. Row Domination.
a. Same as always.
It is true for Criterion 2 that, if only a minimal cost realization is desired (not all), we need only look for coverage, since the cost of all rows is the same.

Table 5.9. Prime Table Reduction - Criteria 1 and 2
[Table 5.9: the prime table for the example, reduced under Criterion 2; primes A through H with x's marking coverage of the cells of f1, f2, and f3 and a cost column at the right.]

1. B is essential in f2(0) - use it also in f1, because the cost is 0.
2. H is essential in f2(5) - use it also in f3, because the cost is 0.
3. G is essential in f3(3) - use it also in f1, because the cost is 0.


4. E is dominated by F (cost is the same).
5. F is secondly essential in f3(4) - used also in f1 (free).

p = B2 B1 H2 H3 G3 G1 F3 F1
Cost: 2 + 0 + 3 + 0 + 3 + 0 + 2 + 0 = 10
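The zero-cost rule for repeated uses under Criteria 1 and 2 reduces to charging each distinct prime exactly once. A minimal sketch, checked against the result just computed (full costs B=2, H=3, G=3, F=2, as read from the cost line above; the function name is my own):

```python
def cost_criteria_1_2(selection, full_cost):
    """Criteria 1 and 2: second and subsequent uses of a prime cost zero,
    so each distinct prime is charged its full cost exactly once."""
    return sum(full_cost[p] for p in {prime for prime, _fn in selection})

# p = B2 B1 H2 H3 G3 G1 F3 F1 from the worked Criterion 2 example:
sel = [("B", 2), ("B", 1), ("H", 2), ("H", 3),
       ("G", 3), ("G", 1), ("F", 3), ("F", 1)]
print(cost_criteria_1_2(sel, {"B": 2, "H": 3, "G": 3, "F": 2}))  # 10
```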

Table 5.10. Multiple Output Example - Criterion 1

[Table 5.10: the prime table for the example, reduced under Criterion 1; primes A through H, each of unit Criterion 1 cost, with x's marking coverage of the cells of f1, f2, and f3.]

1. B is essential in f2 - use it also in f1, because the cost is 0.
2. H is essential in f2 - use it also in f3, because the cost is 0.
3. G is essential in f3 - use it also in f1, because the cost is 0.
4. A is dominated by F (cost is the same).
5. E is dominated by F (cost is the same).
6. F is secondly essential in f1 and f3.

p = B2 B1 H2 H3 G3 G1 F1 F3
Cost: 1 + 0 + 1 + 0 + 1 + 0 + 1 + 0 = 4


Problem 5.12. Perform prime implicant table reduction using (only) the rules for essential primes: a. for Criterion 1; b. for Criterion 3.
[Prime table: primes A through G against columns C1-C9, with a Criterion 1 cost column.]
Problem 5.13. Perform prime implicant table reduction using (only) the rules for column domination: a. for Criterion 1; b. for Criterion 3.
[Prime table: primes A through G against columns C1-C9, with a cost column.]
Problem 5.14. Perform prime implicant table reduction using (only) the rules for row reduction: a. for Criterion 1; b. for Criterion 2; c. for Criterion 3. (Note: costs are for Criterion 1.)
[Prime table: primes A through H against columns C1-C9, with a cost column.]
Problem 5.15. Perform prime implicant table reduction on the following table: a. for Criterion 1; b. for Criterion 2; c. for Criterion 3. (Note: costs given are Criterion 3 costs.)
[Prime table: primes A through F against columns C1-C9, with a cost column.]
Problem 5.16. Consider systems with n inputs and m outputs.
a. If we allow the m outputs to contain the same function, how many different systems can there be with n inputs and m outputs?
b. If no two output functions can be the same, how many different systems can there be?
Problem 5.17. For an n-input, m-output system, how many different Karnaugh maps would have to be considered to find all primes of interest in minimizing a design?

Problem 5.18. For all three criteria, find a minimum cost dnf and a minimum cost cnf circuit for Problem 5.9.
Problem 5.19. For Criteria 2 and 3, find a minimum cost dnf and a minimum cost cnf circuit for the following 3-output system.
f1(w,x,y,z) = (4,5,9) + d.c. = (1,3,6)
f2(w,x,y,z) = (0,4,5,7,11) + d.c. = (1,3,6)
f3(w,x,y,z) = (0,1,7,11) + d.c. = (3,4,9)

5.12. Hazards With Combinational Circuits
It was mentioned in Chapter 3 that physical systems generally fail to meet the mathematical postulates exactly, creating problems called hazards. With switching systems, the problem is created by delays in the circuit. With combinational circuits, the delays result in failure to meet the postulates defining the complement: a·a' = 0 and a + a' = 1. This is shown in Figure 5.5, where it is assumed that a delay occurs in the formation of the complement of x. There is an additional complication created by the fact that rise and fall times are not zero, but the effect is still typically a delay.
[Figure 5.5 waveforms: x; the delayed complement x'; the product x·x', which should be a constant 0 but shows a brief positive pulse; and the sum x + x', which should be a constant 1 but shows a brief negative pulse.]

Figure 5.5. The Hazard: Effect of Delays in Digital Signals
The problem arises as a result of changing signals. One solution is to disable circuits during the time that signals are changing. This method is generally used in large systems because it eliminates a tremendous number of synchronization problems. Such circuits are called synchronous circuits or pulse mode circuits, because the principal way of making certain all circuits have quiesced is to use a single pulse, generally called a clock pulse, set at a rate that guarantees all circuits will have quiesced. The use of a synchronizing signal implies that circuit action will be slowed down in general, since the clock rate must guarantee the integrity of the slowest part of the circuit. When speed is worth the additional design costs, the circuits are designed for asynchronous operation, which permits circuits to operate at their natural speeds. The mode of operation is referred to as either asynchronous or fundamental mode, as opposed to synchronous or pulse mode. In these circuits, hazards must be considered, as the pulses shown in Figure 5.5 can cause improper operation. These pulses are greatly exaggerated in that figure, and with standard oscilloscopes they would normally be too narrow to be visible. They represent some very pesky problems. However, there is a simple design procedure that permits avoidance of such problems at some (generally minor) expense.


Consider the circuit of Figure 5.7: a two-level and-or circuit in which gate A forms xz, gate B forms yz', and an or gate forms the output f = xz + yz'.

Figure 5.7. f(x,y,z) = xz + yz'.
The circuit is examined for the existence of unwanted pulses when signals change - that is, when the input state changes from one cell on the Karnaugh Map to another. First we note that if x and z are both high, then A will be 1 and any change in y cannot affect the output; there is no possibility of a pulse error. We can generalize to the following statement:
5.9. In a dnf realization, if a signal change stays within the 1-cells covered by an implicant formed by the circuit, no extraneous pulses will be generated.
If we examine movement between 0-cells, we see that the delay in a signal holding a gate low cannot produce a pulse: if the output is low before and after a signal change at the input, some other input to the gate has to be holding the output low. Therefore, no and gate will receive signals that cause its output to go high. If we examine the transition from a 0-cell to a 1-cell, we see that the worst thing that can happen is a delay in obtaining the 1. Similarly, in going from a 1-cell to a 0-cell, the worst thing that can happen is a delay. There is one case remaining: a movement from a 1-cell covered by one implicant to a 1-cell covered by another implicant, both being circuit realizations and mutually exclusive in their coverage. Returning to Figure 5.7, this occurs with a movement from cell 6 to cell 7, or from cell 7 to cell 6. From the circuit point of view, there will be either a high at A which goes to 0 as B goes high, or a high at B which goes to 0 as A goes high. It is obvious that a problem may exist, since in the real world two events will never happen at exactly the same instant. The problem occurs if one output goes low before the other goes high, which would allow the output of the or gate to drop to 0. In examining the terms, we see that the motion from cell 6 to cell 7 is basically a commutation from term yz' to term xz.
The delay is in z', and the yz' term will be slower to change. Since it is a 1, the output of the other gate will rise to 1 before this one drops, and the delay will have no effect. However, if the input states change so that the commutation is from cell 7 to cell 6, the delay in output B will allow output A to drop to 0 before output B rises to 1. The result is a very sharp, short negative-going pulse at the output. This is called a static-1 hazard, since the output is supposed to stay at 1. We may therefore state:
5.10. Circuits designed from dnf are subject to static-1 hazards if 1-to-1-cell transitions are not covered by the circuit implementation.
The remedy is especially simple: the circuit must be designed so that all 1-to-1-cell transitions are covered. The design process covered earlier in this chapter can be modified for hazard-free design.
5.11. All 1-cell implicant pairs (which appear in Column 2 of the Quine-McCluskey Procedure) must be covered to ensure hazard-free operation in a dnf realization.
To implement this feature in the design process, the following is recommended:
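Statement 5.11's coverage requirement can be checked mechanically. The sketch below is my own helper, using the (base, removed-powers) encoding from the Quine-McCluskey discussion; it reports every pair of adjacent 1-cells that no single implicant of the realization covers.

```python
def implicant_cells(base, removed):
    """Cells covered by an implicant: the base number plus every
    combination of the removed bit weights."""
    cells = {base}
    for r in removed:
        cells |= {c | r for c in cells}
    return cells

def static1_hazards(implicants, nvars):
    """Report adjacent pairs of 1-cells (differing in one variable) that no
    single implicant of the realization covers - each is a static-1 hazard."""
    covers = [implicant_cells(b, r) for b, r in implicants]
    ones = set().union(*covers)
    hazards = []
    for cell in sorted(ones):
        for bit in (1 << k for k in range(nvars)):
            nbr = cell ^ bit
            if nbr in ones and cell < nbr:
                if not any(cell in cv and nbr in cv for cv in covers):
                    hazards.append((cell, nbr))
    return hazards

# Figure 5.7's realization f = xz + yz': xz is (base 5, y removed),
# yz' is (base 2, x removed).  The 6-to-7 transition is uncovered.
print(static1_hazards([(5, [2]), (2, [4])], 3))  # [(6, 7)]
```

Adding the implicant xy (base 6, z removed) covers the (6,7) pair and removes the hazard, which is exactly the remedy the text prescribes.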


a. In the Quine-McCluskey procedure, when creating Column 2 for a single-output circuit, if both cells are 1-cells (that is, neither is a don't care), circle the pair. If the circuit is a multiple-output circuit, circle the tags of the functions in which both cells are 1-cells.
b. When setting up the prime table for coverage, first place the circled pairs across the top, then include any (non-don't-care) 1-cells that have not been covered by the pairs.
All other aspects of the design remain the same. Since cnf circuits are duals of dnf circuits, the dual problem occurs: a positive pulse may be generated in moving between 0-cells.
5.12. Circuits designed from cnf are subject to static-0 hazards if 0-to-0-cell transitions are not covered by the circuit implementation.
The remedy follows:
5.13. All 0-cell implicant pairs (which appear in Column 2 of the Quine-McCluskey procedure) must be covered to ensure hazard-free operation of a cnf circuit.
5.14. Since there is a delay through every gate as well as through inverters, multi-level networks are more complicated to analyze, and it is possible to obtain pulses with 0-to-1 and 1-to-0 transitions. These have been named dynamic hazards. It has been shown, however, that if the 1-to-1 transitions for dnf portions of circuits and the 0-to-0 transitions for cnf portions of circuits are covered, then the operation will be hazard-free.
The hazard-free prime table of coverage for the example in Figure 5.1 becomes the one shown in Table 5.11 (instead of the one shown in Table 5.1). Pairs are taken from the Quine-McCluskey procedure in Table 5.5.
Table 5.11. Hazard-Free Table of Coverage for Q/M Procedure in Table 5.5

Prime         (0,8)  (8,9)  (3,7)  (3,11)  (5,7)  (9,11)  (7,15)  (11,15)
{3,7,11,15}                   X      X                       X       X
{0,8}           X
{8,9}                  X
{5,7}                                        X
{9,11}                                               X

Note: If there are don't care cells, pairs containing the don't care cells should not appear in the coverage table. However, any non-don't-care cells that do not appear in the pairs must appear singly for coverage purposes. Generally, there will be a large percentage of essential primes.
Problem 5.20. Use the Quine-McCluskey method to design a hazard-free circuit in dnf for the function: f(w,x,y,z) = (0,3,5,7,10,15) with d.c. = (2,4,13).
Problem 5.21. Design a hazard-free cnf circuit for the function: f(w,x,y,z) = (0,1,4,5,6,7,10,11,12,13,14).


5.13. Additional Problems for Chapter 5
Problem 5.22. Given f(x,y,z) = (0,1,7) + d.c. = (2,3,6), determine whether the following terms are implicants and, if they are implicants, whether or not they are prime.
a. x y   b. x y   c. x + z   d. y   e. x + y   f. x   g. y z   h. x y   i. x x
Problem 5.23. Given f(w,x,y,z) = (0,4,5,7) + d.c. = (1,2,8,9,12,13,15), determine whether the following terms are implicants, implicates, prime implicants, prime implicates, essential prime implicants or essential prime implicates.
a. y   b. w   c. w y   d. z y   e. x + z   f. z + y   g. w x y z   h. w + x + y + z   i. w y
Problem 5.24. Given that g is an implicant of h, indicate whether the following statements are true or false.
a. g is an implicant of h.
b. g is an implicant of h'.
c. h is an implicant of g.
d. h is an implicate of g.
e. g is an implicate of h'.
Problem 5.25. Given that g is an implicate of h, indicate whether the following statements are true or false.
a. g is an implicate of h.
b. g is an implicate of h'.
c. h is an implicate of g.
d. h is an implicant of g.
e. g is an implicant of h'.
Problem 5.26. Given f(x,y,z) = (0,2,3,6), draw the Karnaugh Map.
a. Write down all implicants, and show which ones are prime.
b. Write down all implicates, and show which ones are prime.
c. Show which primes are essential.
Problem 5.27. For each of the following Karnaugh maps, write the function in a minimal (Criterion 3) form for both dnf and cnf.

a.
yz\wx  00 01 11 10
  00    1  1  0  0
  01    0  0  0  0
  11    0  0  0  0
  10    1  1  0  0

b.
yz\wx  00 01 11 10
  00    1  0  0  1
  01    0  1  1  0
  11    0  1  1  0
  10    1  0  0  1

c.
yz\wx  00 01 11 10
  00    0  1  0  0
  01    1  1  1  1
  11    0  1  0  0
  10    0  1  0  0

Problem 5.28. Given f = (1,3,6,7) + d.c. = (4), draw the Karnaugh Map.
a. Write all implicants, and show which ones are prime.
b. Write all implicates, and show which ones are prime.

c. Show which primes are essential.
Problem 5.29. Given that f must be 1 in cells 4,6,7,11,12,14 and 15, and f must be 0 in cells 0,1,2,5,8 and 10:
a. Draw the Karnaugh map, showing don't care cells as pluses.
b. Find all prime implicants and prime implicates.
Problem 5.30. Given f = (4,6,11,12,14,15) + d.c. = (1,3,9,13):
a. Draw the Karnaugh Map.
b. List all prime implicants.
c. List all prime implicates.
d. List all essential primes.
Problem 5.31. Given f = (0,4,5,6,7,13) + d.c. = (8,14,15). Using the Karnaugh map:
a. Find all prime implicants.
b. Find all prime implicates.
c. How many single-cell implicants are there?
d. How many two-cell implicants are there?
e. How many single-cell implicates are there?
f. How many two-cell implicates are there?
g. Write all prime implicants in both decimal and algebraic forms.
h. Write all prime implicates in both decimal and algebraic forms.
Problem 5.32. Repeat Problem 5.31, given f = (3,4,5,6,11,14,15) + d.c. = (8,9,10).
Problem 5.33. For the functions shown in the following Karnaugh Maps:
a. Write all prime implicants in both decimal and algebraic forms.
b. Write all prime implicates in both decimal and algebraic forms.

a.
yz\wx  00 01 11 10
  00    +  1  0  0
  01    1  0  1  +
  11    1  0  1  0
  10    +  1  0  0

b.
yz\wx  00 01 11 10
  00    1  0  1  1
  01    0  1  +  0
  11    0  1  +  0
  10    1  0  1  1

c.
yz\wx  00 01 11 10
  00    0  1  1  0
  01    0  0  1  0
  11    1  1  0  +
  10    1  1  1  1

Problem 5.34. Find minimal (Criterion 3) expressions in dnf for the Karnaugh maps in Problem 5.33.
Problem 5.35. Find minimal (Criterion 3) expressions in cnf for the Karnaugh maps in Problem 5.33.
Problem 5.36. For the functions shown in the following Karnaugh Maps:
a. Write all prime implicants in both decimal and algebraic forms.
b. Write all prime implicates in both decimal and algebraic forms.

a.
yz\wx  00 01 11 10
  00    0  1  0  0
  01    1  1  1  1
  11    +  +  0  0
  10    0  1  0  0

b.
yz\wx  00 01 11 10
  00    1  +  0  1
  01    1  1  0  1
  11    +  1  1  +
  10    0  0  0  0

c.
yz\wx  00 01 11 10
  00    0  1  1  1
  01    1  1  +  1
  11    0  +  +  0
  10    0  0  1  1
87

b.

c.

Chapter 5: Minimization of Combinational Circuit

Problem 5.37. Find minimal (Criterion 3) expressions in dnf for the Karnaugh maps in Problem 5.36.

Problem 5.38. Find minimal (Criterion 3) expressions in cnf for the Karnaugh maps in Problem 5.36.

Problem 5.39. Given f = (1,3,6,7) + d.c. = (4), use the Quine-McCluskey method to find all primes (for dnf and cnf).

Problem 5.40. Use the Quine-McCluskey method to find the primes (dnf and cnf) for the function in Problem 5.29.

Problem 5.41. Use the Quine-McCluskey method to find the primes (dnf and cnf) for the function in Problem 5.31.

Problem 5.42. Use the Quine-McCluskey method to find the primes (dnf and cnf) for the function in Problem 5.32.

Problem 5.43. Perform table minimization on the following prime tables for essential primes (only).
e a b c d a b c d a b c d e A X X A X X A X X X X X B X X X X X X X B B X C X C X C X X X X D D X X X X X D X X X X X X X

Problem 5.44. Perform table minimization on the prime tables in Problem 5.43 for column domination (only).

Problem 5.45. Perform table minimization on the prime tables in Problem 5.43 for row domination (only) for each of the three criteria. (Assume costs are A=3, B=3, C=1, D=2, E=3 for Criterion 1.)

Problem 5.46. Tables which can be reduced no further by the simple rules are said to be cyclic. For each of the following tables, specify whether or not the table is cyclic; if it is not cyclic, give the reason it is not. If it would be cyclic under certain criteria and costs, specify.
a b c a b c d a b c a b c d A X A X A X X X X A X X X X B X X B B X X X X X B X X X X C C X X C X C X D X X X X X D X D X X E X
a. b. c. d.
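The Quine-McCluskey reductions asked for in Problems 5.39 through 5.42 can be cross-checked mechanically. The following Python sketch is ours, not the text's (the names `qm_primes` and `covers` are our own): a term is a (value, mask) pair, where a set mask bit marks a dashed position, and terms that differ in exactly one bit are merged until no merge is possible.

```python
from itertools import combinations

def qm_primes(ones, dont_cares):
    """Quine-McCluskey: repeatedly merge terms that differ in one bit.

    A term is a (value, mask) pair; set bits in mask are dashes.
    Returns the prime implicants as a set of (value, mask) pairs.
    """
    terms = {(m, 0) for m in ones + dont_cares}
    primes = set()
    while terms:
        merged, used = set(), set()
        for (v1, m1), (v2, m2) in combinations(sorted(terms), 2):
            if m1 == m2 and bin(v1 ^ v2).count("1") == 1:
                merged.add((v1 & v2, m1 | (v1 ^ v2)))
                used.update({(v1, m1), (v2, m2)})
        primes |= terms - used          # terms that did not merge are prime
        terms = merged
    return primes

def covers(prime, minterm):
    """True if the prime implicant covers the given minterm."""
    value, mask = prime
    return (minterm & ~mask) == value
```

For Problem 5.39, f = (1,3,6,7) + d.c. = (4) on three variables, `qm_primes([1, 3, 6, 7], [4])` yields four primes: (1,2), (3,4), (6,1) and (4,2), that is 0-1, -11, 11- and 1-0 in dash notation.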

Problem 5.47. Given that the Petrick Function is AB+AE, draw the minimal cost circuits (Criterion 3):
a. if the implicants are A = wx, B = y z, E = w x z .
b. if the implicates are A = w+ y , B = x + z , E = w + x +y .
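Problem 5.47 starts from a Petrick Function that is already formed; forming and expanding one can also be sketched in code. In this sketch of our own (the name `petrick` and the table representation are ours, not the text's), each row maps to the set of columns it covers, the product of column sums is multiplied out with absorption, and the fewest-row covers are returned.

```python
from itertools import product

def petrick(table):
    """Expand the Petrick function of a prime table.

    table maps row name -> set of covered columns.  Returns the
    fewest-row irredundant covers as frozensets of row names.
    """
    columns = set().union(*table.values())
    sums = [[row for row, cols in table.items() if col in cols]
            for col in columns]
    covers = set()
    for choice in product(*sums):
        cover = frozenset(choice)
        # absorption: keep a cover only if no proper subset is present,
        # and drop any proper supersets already kept
        if not any(c < cover for c in covers):
            covers = {c for c in covers if not cover < c}
            covers.add(cover)
    fewest = min(len(c) for c in covers)
    return {c for c in covers if len(c) == fewest}
```

For the cyclic table {A: {1,2}, B: {2,3}, C: {1,3}} the expansion of (A+C)(A+B)(B+C) leaves the three two-row covers.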

Problem 5.48. Reduce the prime table shown and develop the complete Petrick Function for each criterion. (Cost is Criterion 1 cost.)
C1 C2 C3 C4 C5 Cost
A X 2 B X X 4 C 5 X X X D 3 X X X E 3 X X F 4 X X

Problem 5.49. Reduce the prime table shown and develop the complete Petrick Function for each criterion. (Cost is Criterion 1 cost.)
C1 C2 C3 C4 C5 C6 C7 Cost
A B C D E F G a b X X X X a. c X X cost 3 2 2 X X X X X X X X X X X X X 2 2 1 4 3 1 5 a X X b X X c X X X c. d X X cost 3 2 3 3 X X X X X X X a b X X X X X b. c X X

Problem 5.50. Given the following cyclic tables, find the Criterion 3 cost of all realizations.
A B C A B C D X cost 3 4 2 X 3 A B C D

Problem 5.51. The following are selected primes from a Quine-McCluskey reduction of f(w,x,y,z).
A = (1,3,9,11) (2,8)
B = (2,6) (4)
C = (11)
Write the corresponding algebraic expression and its Criterion 3 cost if it represents:
a. the ones (and don't cares) of the function.
b. the zeros (and don't cares) of the function.

Problem 5.52. The following are selected primes from a Quine-McCluskey reduction of f(w,x,y,z).
A = (8,10,12,14) (2,4)
B = (7,15) (8)
C = (3)
Write down the corresponding algebraic expression and its Criterion 3 cost if it represents:
a. the ones (and don't cares) of the function.
b. the zeros (and don't cares) of the function.

Problem 5.53. Given the following resultant Petrick Function and associated prime implicants, show the algebraic forms of all minimal realizations and draw the final circuits.
p = A(B+C)
A = (0,2,8,10) (2,8)
B = (2,3,10,11) (1,8)
C = (3,7,11,15) (4,8)

Problem 5.54. Given the following resultant Petrick Function and associated prime implicates, show the algebraic forms of all minimal realizations and draw the final circuits.
p = C(D+E)
C = (0,1,4,5) (1,4)
D = (4,6,12,14) (2,8)
E = (4,5,12,13) (1,8)

Problem 5.55. Given the following functions, draw all Karnaugh maps of interest and find all prime implicants of interest.
f1(x,y,z) = (1,5) + d.c. = (7)
f2(x,y,z) = (2,3,6) + d.c. = (7)
f3(x,y,z) = (0,1,5,6) + d.c. = (7)

Problem 5.56. Perform a tagged Quine-McCluskey reduction to obtain the prime implicants of interest for the functions in Problem 5.55.

Problem 5.57. Using the method of your choice, find the minimal cost dnf and cnf circuits for the functions expressed in the following Karnaugh maps for Criterion 3.
wx wx wx yz yz yz 00 01 11 10 00 01 11 10 00 01 11 10 0 0 1 00 + 00 1 1 0 + 00 0 1 0 0 01 11 10 0 1 1 + + 0 a. + + 0 0 1 1 01 11 10 0 0 0 1 1 1 b. 1 + + 0 0 0 01 11 10 1 1 0 1 + 0 c. 0 + 0 1 1 0

Problem 5.58. Using the method of your choice, find the minimal cost dnf and cnf circuits for the functions expressed in the following Karnaugh maps for Criterion 3.
wx wx wx yz 00 01 11 10 yz 00 01 11 10 yz 00 01 11 10 1 1 0 00 1 00 + 0 1 + 00 1 1 0 0 01 11 10 1 0 0 1 + 1 1 0 1 0 + + 01 11 10 1 0 1 0 1 0 1 0 0 + 0 + 01 11 10 1 0 1 0 + 0 0 + 1 + + 0 b. a. c.

Problem 5.59. Find the prime implicants for f1, f2, and f1f2 from the following Karnaugh maps.
wx wx yz 00 01 11 10 yz 00 01 11 10 00 1 00 + 1 1 0 0 1 + 01 11 10 1 0 0 1 + 1 f1 1 0 1 0 + + 01 11 10 1 0 1 0 1 0 f2 1 0 0 + 0 +

Problem 5.60. Perform a tagged Quine-McCluskey reduction to obtain the prime implicants of interest for the following multi-function specifications.
a. f1(w,x,y,z) = (0,1,2,6,7,13) + d.c. = (3,11,15)
   f2(w,x,y,z) = (4,8,12,15) + d.c. = (0,1,7)
b. f1(w,x,y,z) = (0,3,4,6,10,12) + d.c. = (7,8,14,15)
   f2(w,x,y,z) = (0,1,5,8,9,15) + d.c. = (6,10,12,13)
c. f1(w,x,y,z) = (0,1,6) + d.c. = (4,5)
   f2(w,x,y,z) = (2,4,5) + d.c. = (3)
   f3(w,x,y,z) = (1,2) + d.c. = (6)

Problem 5.61. Perform a tagged Quine-McCluskey reduction to obtain the prime implicants of interest for the following multi-function specifications.
a. f1(w,x,y,z) = (3,5,6,14,15) + d.c. = (7,13)
   f2(w,x,y,z) = (2,3,6) + d.c. = (7)
   f3(w,x,y,z) = (5,13) + d.c. = (7,15)
b. f1(w,x,y,z) = (1,3,4,5,13) + d.c. = (7,15)
   f2(w,x,y,z) = (5,6,14) + d.c. = (7,15)
   f3(w,x,y,z) = (3,6,11,14) + d.c. = (7,15)
c. f1(w,x,y,z) = (3,5,6,14,15) + d.c. = (7,13)
   f2(w,x,y,z) = (2,3,6) + d.c. = (7)
   f3(w,x,y,z) = (5,13) + d.c. = (7,15)
d. f1(w,x,y,z) = (0,1,4,5,8) + d.c. = (12,13,14)
   f2(w,x,y,z) = (2,3,4,8,11,15) + d.c. = (12,13,14)
   f3(w,x,y,z) = (0,1,2,3,4,11,15) + d.c. = (12,13,14)
e. f1(w,x,y,z) = (2,3,10,11) + d.c. = (7,13)
   f2(w,x,y,z) = (4,5,12) + d.c. = (7)
   f3(w,x,y,z) = (5,9,12) + d.c. = (7,13)
f. f1(w,x,y,z) = (3,6,12,14) + d.c. = (7,11,13,15)
   f2(w,x,y,z) = (2,5,6,9,14) + d.c. = (7,11,13,15)
   f3(w,x,y,z) = (4,9,12) + d.c. = (7,11,13,15)
g. f1(w,x,y,z) = (1,5,10,13) + d.c. = (7,15)
   f2(w,x,y,z) = (3,5,11,13) + d.c. = (7,15)
   f3(w,x,y,z) = (1,3,9,11) + d.c. = (5,7,13)

Problem 5.62. Perform a tagged Quine-McCluskey reduction to obtain the prime implicates of interest for the multiple output functions in Problem 5.61.

Problem 5.63. Perform a tagged Quine-McCluskey reduction to obtain the prime implicates of interest for the multiple output functions in Problem 5.62.

Problem 5.64. Perform prime implicant table reduction using (only) the rules for essential primes. a. For Criterion 1; b. For Criterion 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 Cost
A B C D E F G H X X X X X X X X X X X X X X X X X 3 2 2 4 2 X X 2 2 2
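The essential-prime rule used in Problems 5.64 and 5.65, namely that a column covered by exactly one row forces that row into every cover, is easy to state in code. This Python sketch uses our own `essential_rows` name and table representation, not anything from the text:

```python
def essential_rows(table):
    """Find essential rows in a prime table.

    table maps row name -> set of covered columns.  A row is essential
    if some column is covered by that row alone.
    """
    essentials = set()
    columns = set().union(*table.values())
    for col in columns:
        covering = [row for row, cols in table.items() if col in cols]
        if len(covering) == 1:          # only one row covers this column
            essentials.add(covering[0])
    return essentials
```

For example, in the table {A: {1,2}, B: {2,3}, C: {3,4}}, column 1 is covered only by A and column 4 only by C, so A and C are essential.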

Problem 5.65. Perform prime implicant table reduction using (only) the rules for essential primes. a. For Criterion 1; b. For Criterion 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 Cost
B C D E F G H I J X X X X X X X X X X X X X X X 3 3 3 4 4 4 4 X 4 4 X X X X X

Problem 5.66. Perform prime implicant table reduction using (only) the rules for row reduction. a. For Criterion 1; b. For Criterion 2; c. For Criterion 3. (Note: Costs are for Criterion 1.)
C1 C2 C3 C4 C5 C6 C7 C8 Cost
A X 2
B C D E F G X X X X X X X X X X X X X X 3 3 2 3 3 3

Problem 5.67. Given the following prime implicant table where the costs are those associated with Criterion 1, perform table reduction using: a. Criterion 1; b. Criterion 2; c. Criterion 3.
C1 C2 C3 C5 C6 C7 Cost
A X X 2
B C D E F G X X X X X X X X X X X 3 3 2 3 3 3

Problem 5.68. Given the following prime implicant table where the costs are those associated with Criterion 1, perform table reduction using: a. Criterion 1; b. Criterion 2; c. Criterion 3.
C1 C2 C3 C4 C5 C6 Cost
A B C D E F X X X X X X X X X X X X X X X X 2 3 3 2 3 3

Problem 5.69. Given the following prime implicant table where the costs are those associated with Criterion 1, perform table reduction using: a. Criterion 1; b. Criterion 2; c. Criterion 3.
C1 C2 C3 C4 C5 C6 C7 Cost
X X X X X A 3 X X X X X B 4 X X C X X 4 X X D X 4 X X E 3

Problem 5.70. Perform prime implicant table reduction on the following table: a. For Criterion 1; b. For Criterion 2; c. For Criterion 3. (Note: Cost given is Criterion 3 cost.)
C1 C2 C3 C4 C5 C6 C7 C8 C9 Cost
A X 4
B C D E F G H X X X X X X X X X X X X X X X X X X X X 5 4 5 3 3 4 4

Problem 5.71. For each of the following tables, specify whether or not the table is cyclic and, if it is not cyclic, the reason it is not. If it would be cyclic under certain criteria and costs, specify.
f f f f f f 2 a A B C D X X X X X X X X X X b 1 e A B C D a X X b X X X c X X X d A B C X D a X X X b X X e X X X X X X X X

a. b. c.

Problem 5.72. Perform all reductions on the following Table of Primes that have to do (only) with essential primes.
C1 C2 C3 C4 C5 C6 C7 C8 Cost
A B C D E F G H X X X X X X X X X X X X X X X X X X 1 3 3 3 4 4 3 5

Problem 5.73. Perform all reductions on the following Table of Primes that have to do (only) with column domination.
C1 C2 C3 C4 C5 C6 C7 C8 Cost
A B C D E F G H X X X X X X X X X X X X X X X X X X X X X X 3 3 4 4 5 3 4 4

Problem 5.74. Perform all reductions on the Table of Primes in Problem 5.72 that have to do (only) with row removal.

Problem 5.75. Perform all reductions on the Table of Primes in Problem 5.73 that have to do (only) with row removal.

Problem 5.76. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform the reduction for Criteria 2 and 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 Cost
A B C D E F G X X X X X X X X X X X X X X X X X X X X X X 3 3 4 4 4 3 1

Problem 5.77. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform all reductions for Criteria 2 and 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 Cost
A B C D E F G X X X X X X X X X X X X X X X 1 1 4 4 4 3 3 X X X X X X

Problem 5.78. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform the reduction for Criteria 2 and 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 Cost
A B C D E F G X X X X X X X X X X X X X X X X X X X X 3 4 5 5 4 3 4

Problem 5.79. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform the reduction for all three criteria.
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 Cost
A 3 X X 4 X B X 4 X X C X 4 X D X 4 X X X E 3 X X F 1 X X G X X X X X 3 X X H X X X X X X I 3

Problem 5.80. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform reduction for Criteria 2 and 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 Cost
A B C D E F G X X X X X X X X X X X X X X X X X X X X X X X X 3 4 3 4 3 4 3

Problem 5.81. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform the reduction for Criteria 2 and 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 Cost
A B X X X X X X X X X X X X C X X X X 4 D X X X X X X X X X X 4 E

Problem 5.82. Perform all reductions on the following Table of Primes and form the Petrick Function. Perform the reduction for Criteria 2 and 3.
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 Cost
1 X A X X X X X 3 X X B X X 3 C X X X 3 D X X X 4 E X X X X X 4 F X X X X X 4 G 1 3 4

Chapter 6: Nand, Nor, Xor, etc. Combinational Circuits

6. Nand, Nor, Xor, etc., Combinational Circuits


The design of combinational circuits in Chapters 4 and 5 centered around the +, ·, and not operators. These operators are consistent with the Boolean Algebra and permitted the development of switching theory to this point. There are other binary switching operators that can be defined, and several have been implemented for use in switching circuits. In general, design of switching circuits takes place with the +, ·, and not operators, and the results are then converted for use with other types of circuits. The principal reason for this lies in the (lack of) properties of the resultant operators. The three principal operations of interest are the nand, nor, and xor operations. There is no standard algebraic symbol for these operators, although there are standards for the circuit symbols. In this chapter, the following operations and circuits will be discussed in relation to combinational design:
Nand operations
Nor operations
Xor operations
Multiplexers as combinational circuits
ROMs, PROMs, etc.
Two of these, nand and nor, are each in themselves sufficient to build any circuit. When developing discrete component circuits, the choice of gates to be used is generally up to the designer. However, when designing integrated circuits, the fabrication technology may very well dictate the type of gates to be used. In this chapter it will be assumed that when nand gates are used, they will be used exclusively, and that when nor gates are used, they will be used exclusively.

6.1. Nand Operations

The nand operation is defined as nand(x,y) = not(x·y). In this text, the nand operation will be represented algebraically as the stroke (|) operator. Figure 6.1 shows several aspects of the stroke operator. We will refer to the complemented-and form, not(x·y), as Form 1, and the or-of-complements form, not(x) + not(y), as Form 2. Nand gates, like or gates and and gates, can have any number of input variables. That is, x|y|z = not(xyz) = not(x) + not(y) + not(z).
Problem 6.1. Is the stroke operator commutative? (yes) Associative? (no) Does it distribute over itself? (no) Prove these using Boolean algebra or Karnaugh Maps.

 y:   0  1
x=0:  1  1
x=1:  1  0

Figure 6.1. Representations of the Nand Operation: the truth table of x | y, and the Form 1, Form 2, and IEEE standard gate symbols.

Several comments are in order:
6.1. In the number of Karnaugh map cells covered, the nand gate is effectively an or gate. Of course, it is an or gate with the complement of the input variables.
6.2. Form 2 is dnf. This implies that nand gate circuits will have an important relationship to dnf circuits.
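The answers to Problem 6.1 can also be verified exhaustively; the following small Python check is ours, not part of the text:

```python
from itertools import product

def nand(*xs):
    # nand(x, y, ...) = complement of the AND of the inputs
    return 0 if all(xs) else 1

# commutative: x|y equals y|x for every input combination
assert all(nand(x, y) == nand(y, x) for x, y in product((0, 1), repeat=2))

# not associative: (x|y)|z differs from x|(y|z) for some inputs
assert any(nand(nand(x, y), z) != nand(x, nand(y, z))
           for x, y, z in product((0, 1), repeat=3))
```

Because the stroke is not associative, x|y|z must be read as a single three-input gate rather than as a cascade of two-input gates.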

99

Chapter 6: Nand, Nor, Xor, etc. Combinational Circuits

6.3. Since the nand operator is not associative, and does not distribute over itself, parentheses become very important and will always have a one-to-one relationship with the gates themselves.
6.4. The complementation through the gate will imply complementation at odd levels of gates back from the output and no complementation at even levels.

Figure 6.2. Nand Circuits: Form 1 and Form 2 realizations of a three-term dnf expression.

Consider now the dnf expression x + x y + x y z in Figure 6.2. We may, for the moment, mix our operator symbols and write (x) ( x y) ( x y z). We immediately recognize that each input is in Form 1, and can be directly converted to stroke notation: ( x ) ( x y) ( x y z). Again, several observations are in order.
6.5. A multi-term dnf expression can be converted directly to stroke notation if:
a. each product term is placed in parentheses, and
b. each operator is replaced with the stroke operator, and
c. each single literal product term is complemented.
We now note that x|x = not(x) and that x|1 = not(x). If a complemented literal is not available, it can be formed with a nand gate. Note also in Figure 6.2 that a judicious use of nand gate symbols shows the similarity to dnf and how the "double not-ing" effect cancels out between gates. The strong relationship to dnf carries even greater implications when we move to mixed form expressions. Before moving to mixed expressions, another example is in order. Consider f = xyz. This is surely dnf. But it is neither Form 1 nor Form 2, and these are the only two forms that can be realized with a nand gate. We must put it in either Form 1 or Form 2. This can be done as ((x|y|z) | 1) or as xyz + 0. The second is in dnf and the previous algorithm can be applied, yielding (x|y|z) | (1). This yields exactly the same circuit as the first form. This leads to the following conclusions:
6.6. An expression must be in either Form 1 or Form 2 to begin circuit realization with nand gates.
6.7. At each level back, the input expression to the next level must be in either Form 1 or Form 2 to continue circuit realization.
6.8. The Nand Conversion Theorem: To change an expression from +, ·, not to stroke form:
a. insert all implied operators and parentheses, and
b. use the generalized DeMorgan's Theorem to obtain an expression in which the unary operator operates only on literals.
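Observation 6.5 can be checked on an example of our own choosing (the expression x + yz + w·not(z) is ours, not the text's): each product term becomes a nand gate, and the lone literal enters the output gate complemented.

```python
from itertools import product

def nand(*xs):
    return 0 if all(xs) else 1

def f_dnf(w, x, y, z):
    # our example dnf expression: f = x + yz + w z'
    return x or (y and z) or (w and not z)

def f_nand(w, x, y, z):
    # rule 6.5: terms in parentheses become nand gates, operators become
    # strokes, and the single-literal term x is complemented at the output gate
    return nand(not x, nand(y, z), nand(w, not z))

# the two realizations agree on all 16 input combinations
assert all(bool(f_dnf(*v)) == bool(f_nand(*v))
           for v in product((0, 1), repeat=4))
```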

c. "Patch up" the expression so that the highest order operator is + and that, through the hierarchy of parentheses, the binary operators alternate.
d. Replace every binary operator with the stroke operator.
e. Complement each literal that is an input to a gate an odd number of gates back from the output. (These would always be literals that were input to an or gate before conversion.)
Note: The latter property can be determined by even or odd parentheses. It also may be of some advantage to label the operators with the number representing their level back from the output. The highest level operator (+) is 1, the next level back (·) is 2, etc. This is done by counting (net) parentheses starting either from the right or the left.

6.2. Non-Negotiable Parentheses

One reason for mixed forms is to provide subfunctions for other circuits, or to use subfunctions that have been developed in other parts of a system. If such functions are to be formed (or have been formed), then we do not have the privilege of rearranging the parentheses within an expression. For example, we could be asked to design a circuit to provide (w+x) + (y+z). The assumption is that (w+x) and (y+z) are needed elsewhere as terms. If we had the freedom to rearrange the circuit parentheses, then we would form (w+x+y+z) with one 4-input or gate. However, the parentheses imply that we are to form (w+x) and (y+z) individually with two-input or gates and then combine the results in a two-input or gate. This adds a bit of complexity if it is to be done with nands. However, the algorithm above will still work, since we simply patch up the expression to yield ((w+x) · 1) + ((y+z) · 1), with level numbers 1 for the central +, 2 for each ·, and 3 for the inner + operators, which yields: ((not(w) | not(x)) | 1) | ((not(y) | not(z)) | 1).
The use of level numbers is recommended with fixed forms since it acts as an additional check on the validity of the expression. All (+) operators must be at odd levels and all (·) operators must be at even levels.
Problem 6.2. Convert the following expressions to nand (stroke) expressions. (Assume parentheses are non-negotiable.)
a. w x y + y z
b. (x + z ) ( x + y) (w + x + y)
c. xz+(xy+wx+(a+b))cd
d. x(yz) +( x ( y z) + (y + z))
e. (u +(v + w ))(x(yz))( u +( v + w))
Final Observations:
6.8. The only constant that will appear in stroke expressions will be the constant 1.
6.9. Literals that are input an odd number of gates back from the output will be complemented from their form in a dnf expression.
6.10. Literals that are input an even number of gates back from the output will be uncomplemented from their form in a dnf expression.
6.11. A minimal dnf circuit will automatically produce a minimal nand circuit. (Proof lies in its strong relationship to dnf. If a cheaper nand circuit could be built, then we could convert backward to a cheaper dnf circuit, hence a contradiction.)

6.12. Conversions can be made directly between +, ·, not circuits and nand circuits by observing that for equivalence:
a. For the +, ·, not circuit, the highest level operator must be an or operator and binary operators must alternate.
b. Literals an odd number of gates back from the output in a nand circuit must be the complement of those in the +, ·, not circuit.
Problem 6.3. Convert the following circuits (a. through f.) to nand circuits. (You may use complemented literals as inputs.)

6.3. An Alternate Algorithm

Since the IEEE standard symbol for a nand gate is a direct representation of the Form 1 algebraic expression, some prefer an algorithm that will always present the "next gate" in Form 1. The resultant algorithm will be the equivalent of 6.8a. Form 2 expressions, however, are converted to Form 1 by using DeMorgan's Law (in reverse). If an expression is not in either Form 1 or Form 2, an inverter must be constructed. If the expression to be converted is already complemented, then the inverter can be constructed by and-ing the expression (under the complement) with 1. If the expression to be converted is not complemented, then the inverter can be constructed by using a double complementation and then and-ing the lower complemented expression with 1. Consider the expression (u + v) + ((x y) z), which is basically in Form 2.
(u + v) + ((x y) z) = (u + v) ((x y) z) = (u + v) | ((x y) z)
but (u + v) = ((u + v) 1) = ((u + v) | 1) = (( u v ) | 1) = (( u | v ) | 1)
and ((x y) z) = ((x y) | z) and (x y) = (x y) = (x | y) = ((x | y) | 1)
giving (( u | v ) | 1) | (((x | y) | 1) | z)

6.13. Alternate Nand Conversion Theorem
a. Insert all implied operators and parentheses (high order operator is an or).
b. "Patch up" the expression so that every level is in the form of a complemented and expression (Form 1). This may require:
1. modifying a Form 2 expression to Form 1 by using DeMorgan's law in reverse, or double not-ing and applying DeMorgan's law once on the expression.
2. if an expression is complemented, but is not in Form 1, then making it Form 1 by and-ing the expression with 1 under the complement.
3. if an expression is not in Form 2 and is not complemented (it will be an and expression), applying a double complementation to the entire expression and and-ing the lower complemented expression with 1 under the upper complementation.

6.4. Conversion From Nand To +, ·, Not

Conversion from nand circuits or stroke notation is basically the reverse of the algorithms discussed for going the other way. First of all, the stroke expressions will already have all parentheses in place, each pair representing the output of a gate. The highest level gate will always convert to an or gate. The next level gate will convert to an and gate, etc., alternating each level; finally, the literals that are single inputs to odd level gates must be complemented.
Problem 6.4. Convert the following stroke notation expressions to +, ·, not form.
a. (wx z ) | (yz) w
b. a(bc)(d(ef))
c. (a((bc)a))((d(ef)) | 1)
Problem 6.5. Convert the following circuits (a. through d.) to circuits with and gates, or gates and inverters. (You may not use complemented literals. Show the not function with the inverter symbol.)

6.5. Nor Operators

The nor operation is defined as nor(x,y) = not(x + y). In this text, the nor operator will be represented algebraically as the dagger operator. Figure 6.3 shows several aspects of the dagger operator. We will refer to the complemented-or form, not(x + y), as Form 3, and the product-of-complements form, not(x)·not(y), as Form 4. Nor gates, like or gates and and gates, may have any number of input variables. That is, nor(x,y,z) = not(x + y + z) = not(x)·not(y)·not(z).
Problem 6.6. Is the nor operator commutative? (Yes) Associative? (No) Does it distribute over itself? (No) Prove these using Boolean Algebra or Karnaugh Maps.

 y:   0  1
x=0:  1  0
x=1:  0  0

Figure 6.3. Representations of the Nor Operation: the truth table of nor(x,y), and the Form 3, Form 4, and IEEE standard gate symbols.

Several comments are in order. In the number of cells covered, the nor gate is effectively an and gate. It is, of course, an and gate with the complement of the input variables. Form 4 is cnf. This implies that nor gate circuits will have an important relationship to cnf circuits. Since the nor operator is not associative and does not distribute over itself, parentheses become very important and will always have a one-to-one relationship with the gates themselves. The complementation through the gate will imply complementation at odd levels of gates back from the output and no complementation at even levels back. We now note that nor(x,x) = not(x) and that nor(x,0) = not(x). If a complemented literal is not available, it can be formed easily with a nor gate. The student has probably noticed by now that the nor gate is a dual of the nand gate. Therefore, everything that has been said about the nand gate and dnf can now be said about nor gates with respect to cnf. Consider now a cnf expression x( x +y)( x + y + z). Recognizing that the expression is in Form 4, it can be realized directly with a 3-input nor gate by implementing x ( x +y) ( x + y +z) (where we have, for the moment, mixed operators in the expression). Each input to the nor gate is automatically in Form 3, and can also be directly implemented with second level nor gates, giving: x ( x y)( x y z).

Figure 6.4. Nor Circuits: Form 3 and Form 4 realizations of a three-term cnf expression.

From this we may observe the following. A multi-term cnf expression can be converted directly to dagger notation if:
1. each sum term is placed in parentheses, and
2. each operator is replaced with the dagger operator, and
3. each literal which is, by itself, a sum term is complemented.
4. An expression must be in either Form 3 or Form 4 to begin circuit realization with nor gates.
5. At each level back, the input expressions to the next level must be in either Form 3 or Form 4 to continue circuit realization.

6.5.1. The Nor Conversion Theorem

To change an expression from +, ·, not to dagger form:
a. Insert all implied operators and parentheses.
b. Use the generalized DeMorgan's Theorem to obtain an expression in which the unary operator operates only on literals.
c. "Patch up" the expression so that the highest order operator is and and, through the hierarchy of parentheses, the binary operators alternate.
d. Replace each binary operator with the dagger operator.
e. Complement each literal which is an input to a gate an odd number of gates back from the output.
Problem 6.7. Convert the expressions in Problem 6.2 to nor (dagger) notation.
Note: The only constant that will appear in dagger expressions will be the constant 0. Literals that are input an odd number of gates back from the output will be complemented from their form in a cnf expression. Literals that are input an even number of gates back from the output will be uncomplemented from their form in a cnf expression. A minimal cnf circuit will automatically produce a minimal nor circuit.
6.14. We may convert directly between +, ·, not and nor circuits by observing that for equivalence:
a. For the +, ·, not circuit, the highest level operator must be an and operation and the binary operators must alternate.
b. Literals an odd number of gates back from the output in a nor circuit will be the complement of those in the +, ·, not circuit.
Problem 6.8. Convert the circuits in Problem 6.3 to nor circuits.
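The dual rule for cnf can be checked the same way, again on an expression of our own choosing (x(y + z)(w + not(z)) is ours, not the text's): each sum term becomes a nor gate, and the lone literal enters the output gate complemented.

```python
from itertools import product

def nor(*xs):
    return 0 if any(xs) else 1

def f_cnf(w, x, y, z):
    # our example cnf expression: f = x (y + z) (w + z')
    return x and (y or z) and (w or not z)

def f_nor(w, x, y, z):
    # sum terms become nor gates; the single-literal term x is complemented
    return nor(not x, nor(y, z), nor(w, not z))

# the two realizations agree on all 16 input combinations
assert all(bool(f_cnf(*v)) == bool(f_nor(*v))
           for v in product((0, 1), repeat=4))
```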

6.5.2. Conversion Of Nor To +, ·, Not

To convert from nor to +, ·, not, there is a gate-to-gate correspondence, with the highest order gate being an and gate, with the operators alternating back from the output, and with literals that are single inputs to odd level gates complemented.
Problem 6.9. Convert the following dagger notation expressions to +, ·, not form.
a. (wx z ) (yz) w
b. a(bc) (d(ef))
c. a((bc)0)(d(ef))
d. a(bc((de)0))
Problem 6.10. Convert the following circuits (a. through d.) to +, ·, not circuits. (You may not use complemented literals. Show necessary not functions with the inverter symbol.)

6.5.3. Conversion Between Nand and Nor

Notice that the complement of a Form 1 expression is automatically in Form 4 (a product form) and the complement of a Form 2 term (a sum term) is automatically in Form 3. A circuit realization in nand must therefore be "patched up" to begin realization with nor gates. If the original expression was or-ed with 0 to begin a circuit realization (complementation at the high order gate), then the or-ed 0 should be removed (the high order gate complement removed). If the original circuit was already in Form 2, the original expression will have to be nor-ed with 0 to obtain a Form 4 expression (a nor complementer added at the higher level). The equivalent alternation of or and and operations for both circuits guarantees that all remaining gates may be replaced on a one-to-one basis. However, since all inputs will have moved from even levels to odd levels and vice versa, the literals must all be complemented. The result is the following statement. To convert between nor and nand: if the first level gate is a complementer, it should be removed; if the first level gate is not a complementer, then one must be added. Then all inputs must be complemented.
1. With nands, an expression is complemented by nand-ing it with 1.
2. With nors, an expression is complemented by nor-ing it with 0.
3. So, we can say: f(x,y,z,...) = (f(not x, not y, not z,...)) nor 0 and f(x,y,z,...) = (f(not x, not y, not z,...)) | 1.
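The nand/nor duality stated here can be verified exhaustively. In the following Python sketch (ours, not the text's), a nand gate becomes a nor gate with complemented inputs followed by a nor-with-0 complementer, and dually for the nor gate:

```python
from itertools import product

def nand(*xs):
    return 0 if all(xs) else 1

def nor(*xs):
    return 0 if any(xs) else 1

pairs = list(product((0, 1), repeat=2))

# a nand gate = a nor gate with complemented inputs plus a nor-0 complementer
assert all(nand(x, y) == nor(nor(not x, not y), 0) for x, y in pairs)

# dually, a nor gate = a nand gate with complemented inputs plus a nand-1
# complementer
assert all(nor(x, y) == nand(nand(not x, not y), 1) for x, y in pairs)
```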

Problem 6.11.
a. Convert to dagger notation: (xy) (yz)
b. Convert to dagger notation: ((xz) (y z ))1
c. Convert to stroke notation: (x y) (y z)
d. Convert to stroke notation: ((x z) (y z ))0
Problem 6.12.
a. Convert the expressions of Problem 6.4 to nor notation.
b. Convert the expressions of Problem 6.9 to nand notation.
c. Convert the circuits of Figure 6.4 to nor gates.
d. Convert the circuits of Figure 6.7 to nand gates.

6.5.4. An Alternate Nor Conversion Algorithm

As with the nand operation, an alternate algorithm exists to convert Boolean logic to nor logic utilizing only the equivalent expression for the IEEE standard symbol, Form 3. The algorithm is as follows.
a. Insert all implied operators and parentheses.
b. "Patch up" the expression so that every level is in the form of a complemented or expression (Form 3). This may require:
1. Modifying a Form 4 expression to Form 3 by using DeMorgan's law (in reverse).
2. If an expression is complemented, but is not in Form 3, then making it Form 3 by or-ing the expression with 0 under the complement.
3. If an expression is not in Form 4 and is not complemented (it will be an or expression), applying a double complementation to the entire expression and or-ing the lower complemented expression with 0 under the upper complementation.

6.6. Xor Operations

The xor operation is represented by the symbol ⊕ and finds considerable use:
a. To complement one variable under the control of another, or
b. To develop an output which is a function of the index of the input, as, for example, with parity checking.
Unlike nand and nor circuits, the xor circuit is defined for only two inputs:

x ⊕ y = not(x)·y + x·not(y)

 y:   0  1
x=0:  0  1
x=1:  1  0

Figure 6.5. Representation of the Xor Operation.

It is seen that for x ⊕ y:
a. if x is the constant 0, then 0 ⊕ y = 1·y + 0·not(y) = y.
b. if x is the constant 1, then 1 ⊕ y = 0·y + 1·not(y) = not(y).
The output of the gate can be either the input y or its complement, depending on x. This provides us with a circuit that allows control of the output between y and not(y). Notice that the output of x ⊕ y will be high when the index of the input is odd (01 or 10). If the index of the input is even (00 or 11), then the output is low. The circuit can be cascaded (x ⊕ y ⊕ z) to yield a system that will be high if and only if the total index is odd. This can be shown in the Table of Combinations for odd and even indexes as in Figure 6.6.

x y z | odd index
0 0 0 |     0
0 0 1 |     1
0 1 0 |     1
0 1 1 |     0
1 0 0 |     1
1 0 1 |     0
1 1 0 |     0
1 1 1 |     1

Figure 6.6. Table of Combinations for Odd Indexes; Map of Odd Indexes (the map of the odd cells forms a checkerboard pattern).
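Both uses of the xor gate, controlled inversion and odd-index (parity) detection, can be checked exhaustively with a short Python sketch of our own:

```python
from functools import reduce
from itertools import product

def xor(x, y):
    return (x + y) % 2

# controlled inverter: control input 0 passes y through, control input 1
# complements it
assert all(xor(0, y) == y and xor(1, y) == 1 - y for y in (0, 1))

# cascaded xors are 1 exactly when the index (number of 1 inputs) is odd
for bits in product((0, 1), repeat=4):
    assert reduce(xor, bits) == sum(bits) % 2
```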
The map of odd cells in Figure 6.6 emphasizes the complementation that occurs between even and odd cells for x and y when the variable z is complemented. This even/odd index is characteristic of xor circuitry, and constrains the use of xor logic to the development of "checkerboard" type functions.

Problem 6.13. Determine whether the xor operator is commutative (yes), associative (yes) and whether it distributes over itself (no). If the xor operator is the only operator in an expression, the fact that it is associative means that any parentheses can be removed. That is to say, the parentheses serve no purpose.

Problem 6.14. What is x⊕x? x⊕x⊕x? x⊕y⊕x? x⊕y⊕x⊕z⊕x?

Problem 6.15. Develop a circuit that will have a 1 output if and only if the domain elements of a system (w,x,y,z) do not have even parity.

Problem 6.16. Develop a circuit that will have a 1 output if and only if the domain elements of a system (w,x,y,z) do not have odd parity.

Problem 6.17. Develop a parity checking circuit for a system (w,x,y,z) that will be able to check domain elements for either odd or even parity under the control of variable e.

6.7. The Multiplexer as a Combinational Circuit Device
The basic intent of a multiplexer is to reduce transmission costs through sampling. There are two types of multiplexers available. The analog multiplexer operates exactly as indicated in Figure 6.7. There are also digital multiplexers, intended only for 0,1 input signals, whose inputs feed and gates under the control of the xyz signals. Early electro-mechanical multiplexers consisted of synchronized motors controlling rotating switches as shown in Figure 6.7. The associated circuits must contain sampling circuitry, and the speed of sampling must be fast enough that the associated delays do not adversely affect the system. With the electro-mechanical system, synchronism is established through the electrical motor power supply.
Figure 6.7 also shows an electronic switching multiplexer, where the synchronizing signals that control the switches are generated externally and wired into the electronic switches.



[Figure: a rotary electro-mechanical multiplexer (top) and an electrical switching multiplexer (bottom), each sampling a set of input lines under the control of signals x, y and z]

Figure 6.7. Electro-Mechanical and Electrical Switching Multiplexers
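Viewed abstractly, a digital multiplexer selects one of 2^n data inputs according to n select signals; wiring the data inputs to constants then realizes an arbitrary truth table over the select variables. A minimal Python sketch of the idea (function and variable names are ours, and this models the selection logic only, not the hardware of Figure 6.7):

```python
def mux(select_bits, data):
    # Treat the select signals (e.g. x, y, z) as a binary index
    # into the 2**n data inputs.
    index = 0
    for bit in select_bits:
        index = (index << 1) | bit
    return data[index]

# Wiring constants 0/1 to the data inputs realizes any truth table;
# here, the odd-index (parity) function of three variables.
odd_index_table = [0, 1, 1, 0, 1, 0, 0, 1]   # indexed by xyz as a binary number
assert mux((0, 0, 1), odd_index_table) == 1
assert mux((1, 0, 1), odd_index_table) == 0
```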

The electronic switching system performs exactly the same function, since the two systems of xyz switches are synchronized. Generally, the switches are operated from the outputs of counters driven by crystal-controlled clocks that are brought into synchronism by signals also transmitted over the line. The multiplexer may be used to realize any truth table over the domain of the switch control signals. Each switch setting is viewed as an element in the domain, and the input is selected as either 0 or 1, whichever is the functional value for that domain element.

6.8. ROMS, PROMS, PALS, PLAS
There are several devices available for effectively setting up combinational logic. These vary from unlimited to limited with respect to the functions that can be formed over the input variable space, and from units fixed during manufacturing to units programmable by the user.

6.8.1. ROMS
ROM is an acronym for read only memory, a name that applies more to its use than its construction. There are many types of read only memories, including magnetically coupled and electrostatically coupled devices. For use in combinational circuits, however, the transistor coupled ROM can be used to realize any function. The ROM is normally "factory built" to specifications. The company is given a truth table that is used to develop a mask. The ROM has a complete n to 2^n decoder, and the mask is used to create the or gate connections to the output line(s). ROMs are available with a single output or with multiple outputs (generally 8). Although simple to produce, a ROM requires production in large quantities to be economical.

6.8.2. PROMS
PROMS are field programmable ROMS. They have a full n to 2^n decoder feeding the output gates. Each input decoder gate is "fused" so that it may have its connection to the or gate broken. When the truth table for a function is to be realized, the PROM is placed in a circuit that "blows" the desired fuses in a process generally referred to as "burning in." PROMS can be mass produced since they are not re-designed for each use.

6.8.3. EPROMS
EPROMS are erasable field programmable PROMS. In these units, the "fuses" are conductive regions or links that can be permanently depleted of carriers (at least for many years). Instead of "blowing" fuses, the desired links are depleted of carriers (also in a process called burning in). However, these devices can be returned to a programmable state by returning carriers to the depleted region, generally through an ultra-violet radiation process.

6.8.4. PAL
PAL is the acronym for Programmable Array Logic. In the PAL, there is generally a limited number of programmable and gates feeding each output or gate. Also, the number of and gate inputs is not necessarily the same for each and gate. Since the input is not intended to be a full n to 2^n decoder, PALs exist for up to 16 inputs.

6.8.5. PLA
PLA is the acronym for programmable logic array. The PLA has both programmable and gates and programmable or gates. Again, the number of and gates and or gates is limited, but they are general gates for the domain of the input variables. The PLA is an ideal device for the design of multiple output circuits, since it permits the use of developed primes in all functions. The tagged Quine-McCluskey procedure is ideal for the design of circuits that will use PLAs. PLAs are also available with flip-flops and with the outputs of the or gates available as inputs; these are called logic sequencers.

6.9. Additional Problems for Chapter 6
Problem 6.18. Convert the following functions to nand operations (use stroke operator). (Note: Parentheses are "non-negotiable.")
a. a+b
b. ab
c. ab+cde
d. (a+b)(c+d)
e. (ab)(c+d)
f. (a+b) + (c+d)

Problem 6.19. Convert the following functions to nand operations (use the stroke operator).
(Note: Parentheses are "non-negotiable.")
a. f = w x y + y z
b. f = xz+xy+wxy
c. f = (x+z)(x+y)(w+x+y)

Problem 6.20. Convert the following functions to nand operations (use stroke operator). (Note: Parentheses are "non-negotiable.")
a. f = (a+ b (c+d e f))(x y + z )
b. f = xz + (xy + wx (a+b))cd
c. f = wx(a+bcd)(y+z)

Problem 6.21. Convert the following functions to nand operations (use stroke operator). (Note: Parentheses are "non-negotiable.")
a. f = w+( x +y) + a b



b. f = x y( a + b)
c. f = w (( x +y)z+a)
d. f = a b + c(de + f )
e. f = ((a+ b )+c)(d+ f h)
f. f = ( x +y z )( y z+b) + a d

Problem 6.22. Convert the following functions to nand operations (use stroke operator). (Note: Parentheses are "non-negotiable.")
a. f = x z + xy + w
b. f = x (y + z )(w+z)
c. f = (( x y)z+xz) w
d. f = ( x +z y )wv + w xz
e. f = a b c (a+ b +xy( c +d))
f. f = a+b c (d+ e f)(g h +i)

Problem 6.23. Convert the following functions to nand operations (use stroke operator). (Note: Parentheses are "non-negotiable.")
a. f = (x+ y ) z ( u + v )
b. f = (ab(cd))(e+fg)
c. f = ((ab)c+d) + e +gh
d. f = a b+ c (d+e)( f g)
e. f = (a+b) + ( c + d ) + f(g h )
f. f = w (x+( y +wz)(x+y))
Problem 6.24. Convert the following nand expressions to +, ·, ¯ notation.
a. f = ( w x z )(yz)
b. f = (zb)(cd)(ef)
c. f = a(bc)(d(ef))
d. f = (a(bc))(c(fg)h)
e. f = (ab)c(ef)

Problem 6.25. Convert the following nand expressions to +, ·, ¯ notation.
a. f = a( b (cd))( e f)
b. f = (ab)c(de)
c. f = (ab)((cd)ef)g
d. f = (ab)(c(de)f)



Problem 6.26. Convert the following expressions to nor (dagger) notation.
a. a+b
b. abc
c. (a+b)(c+d)
d. ab+cd
e. (a+b)(cd)

Problem 6.27. Convert the functions in Problem 6.19 to nor (dagger) expressions.
Problem 6.28. Convert the functions in Problem 6.20 to nor (dagger) expressions.
Problem 6.29. Convert the functions in Problem 6.21 to nor (dagger) expressions.
Problem 6.30. Convert the functions in Problem 6.22 to nor (dagger) expressions.
Problem 6.31. Convert the functions in Problem 6.23 to nor (dagger) expressions.

Problem 6.32. Convert the following functions to +, ·, ¯ notation.
a. f = (w x z )( w y)
b. f = (ab)(cd)(efg)
c. f = (abc)e(a(bc))
d. f = ab(c(de)f)

Problem 6.33. Convert the following functions to +, ·, ¯ notation.
a. f = ab(c(ef)g)
b. f = (((a(bc))e)f)g
c. f = (ab)((c(de)f)g)h

Problem 6.34. Convert the functions in Problem 6.24 to nor (dagger) notation.
Problem 6.35. Convert the functions in Problem 6.25 to nor (dagger) notation.
Problem 6.36. Convert the functions in Problem 6.32 to nand (stroke) notation.
Problem 6.37. Convert the functions in Problem 6.33 to nand (stroke) notation.

Problem 6.38. Convert the following functions to +, ·, ¯ notation.
a. f = (a(b(cb)))(c1)
b. f = ab(ca)(ba)
c. f = a(abc)(bcd)

Problem 6.39. Develop the properties of an algebra built around the {⊕, +} operators. Include theorems for a+(b⊕c) and (a+b) ⊕ (a+c).

Problem 6.40. Develop a table for the operators {+, ·, ¯, ⊕, ↓, |}, showing the distributive properties over each other and themselves. (Define distributivity of a binary operator over itself as a+(b+c) = (a+b)+(a+c), and of the unary operator over a binary operator as: the complement of (a + b) equals ā + b̄. There is no definition for distributivity of an operator over a unary operator.)

Problem 6.41. Use the symbol ? to define the operation a?b = a + b̄. What properties exist for this operator? Can it be used to realize any function?

Problem 6.42. Use the symbol : to define the operation a:b = a·b̄. What properties exist for this operator? Can it be used to realize any function?
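A practical way to check answers to the conversion problems above (our suggestion, not part of the original text) is to compare the truth tables of the original and converted expressions over all input combinations:

```python
from itertools import product

def equivalent(f, g, n):
    """True if Boolean functions f and g of n variables agree on
    every row of the table of combinations."""
    return all(f(*row) == g(*row) for row in product((0, 1), repeat=n))

# Example: verify a nand-only (stroke) form of f = a + b.
nand = lambda a, b: 1 - (a & b)
f = lambda a, b: a | b
g = lambda a, b: nand(nand(a, a), nand(b, b))   # (a|a) | (b|b) in stroke notation
assert equivalent(f, g, 2)
```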



7. Introduction to Sequential Circuits


7.1. Introduction
The previous chapters have been devoted to the design of combinational circuits that provide output signals related to the input signals in the same way, regardless of past history. In effect, we were looking for circuits that would detect certain patterns or profiles at the input. The basic tool for synthesis of these combinational circuits was Boolean Algebra. Several additional tools were developed to help reach the ultimate goal of an economical realization. These tools included the Quine-McCluskey procedure, the Coverage Table for Primes, and the Petrick Function for use in making economical decisions with regard to selection for final implementation. Attention now turns to the development of circuits that are designed to detect particular time variations in the patterns or combinations at the input. A particular combination can be meaningless unless it is embedded within another set of combinations in precisely the right sequence. Such circuits are called sequential circuits. When these circuits were first designed, there was no formal mathematical approach to use. Today, a theory of graphs has been developed, and much of the material we will discuss here has been formalized under a subset of that theory. It might be said that the basic tools for synthesis of sequential circuits are in the theory of directed graphs. As with combinational circuit design, our goals, which include reliability under circumstances where the mathematical postulates do not hold (as well as the ever-present pressure for economical circuits), lead to the use of additional tools. This chapter introduces sequential circuit terminology and stresses the understanding of the original word problem and the development of a state graph and associated transition table.
It is divided into the following sections:
States, graphs and transition tables
Steps involved in the design of sequential circuits
Types of sequential circuits and their state graphs
Types of sequential circuits and their transition tables
These sections are followed by a set of word problems that are to be put in a form for further work using methods described in subsequent chapters.

7.2. States, Graphs and Transition Tables
To aid in the conceptual development of the techniques to be presented, some new terminology is now introduced.

7.2.1. Concept of a State in Sequential Circuits
The student will already have some mental concept of the word "state." We have geographical states, states of mind, etc., and in the previous chapters we have spoken of the "input state" of a combinational circuit. The use of the word "state" in sequential circuits represents a particular condition that exists at some instant in time, encompassing not only the input variables but the internal variables as well.
1. Input State. The particular combination of signals at the input of the circuit at a given point in time (represented as an ordered n-tuple) will be referred to as the input state at that time (this is the same as in previous chapters).
2. Internal State. The circuit itself will have to have some way of remembering the sequence of patterns it has received. These internal memory elements will be set on and off in order to accomplish the total function required. The condition of these internal elements at a specific point in time (taken as an ordered n-tuple) will be referred to as the internal state of the system (or circuit) at that time.
3. Total State. An ordered n-tuple consisting of the internal state and the input state will be referred to as the total state. This particular combination is used primarily in discussing the variables available for generating the required combinational circuits.
4. Output State. The condition of the outputs of the circuit (as an ordered n-tuple) will similarly be referred to as the output state (this is also the same as with combinational circuits). Under some circumstances, the output can be considered as a function of the total state, while in other circumstances it can be considered as a function of the internal state only.

7.2.2. Concept of a State Graph
A directed graph, referred to here as a state graph, will be one of two principal organized entities used to represent the internal states that a system contains and that the system moves through to perform its function. Each internal stable, or quiesced, state will be represented by a circle. The input states that cause the system to move to other internal states will also be represented on the graph, as will the output states. The actual placement of the additional information on the graph is a function of the particular technology used and is discussed in greater detail in later sections. For now, consider the graph in Figure 7.1. x1 and x2 are level-type inputs (as opposed to pulses). If we begin with the system in state a (both inputs low and with zero output), then as long as x1 and x2 remain low, the system will remain in state a. If x2 goes high, then the system will move to state b, and remain there as long as the signals do not change further (the output will remain at zero during the transition and in state b).
If, while in state a, x1 should go high, the system will move from state a to state c where it will stay (with an output of 1) until the input state changes.
[Figure: partial state graph with stable states a (input 00, output 0), b (input 01, output 0) and c (input 10, output 1); directed arcs from a labeled 01/0 (to b) and 10/1 (to c), with labels in the form x1x2/z]

Figure 7.1. Partial State Graph

The lines connecting states are directed. That is, they imply passage from one state to another in one direction only. Also, since it takes time to set the internal memory elements to their new conditions, the transitions occupy finite time intervals called transition states. A transition state is made unique by the total state consisting of the internal state it is leaving and the input state which causes it to leave. Sometimes, but not always, an output may be specified
114

Chapter 7: Introduction to Sequential Circuits

to occur during the transition state. In Figure 7.1., it has been specified (and the assumption would be) that speed is important. That is, the transition state is used to anticipate the output. In such cases, the output must be a function of the total state. If the output were only a function of the internal states, then the anticipation could not be realized, and the output of one will not occur until the internal memory elements have been set. 7.2.3. Concept of a Transition Table The other organized entity that is used to represent a sequential system is called a transition table. This table has two sections, one involved with the transitions from one state to another under all possible input conditions, and the other involved with the output. It is always possible to develop a transition table from a state graph or to draw a state graph from a transition table; they contain the same information. It is human nature that some people find state graphs to be essentially useless. However, most people find them to be a useful image in developing a feeling for the sequential operation. It is a fact that most of the design work is concerned with the transition table. Figure 7.2 contains the portion of a transition table corresponding to the state graph in Figure 7.1.
Transition Table

Current |     Next State      |     Output State
State   | Input State (x1x2)  | Input State (x1x2)
        | 00   01   10   11   | 00   01   10   11
a       | (a)   b    c    –   |  0    0    1    –
b       |  a   (b)   –    d   |  0    0    –    0
c       |  a    –   (c)   d   |  0    –    1    0

(stable total states shown in parentheses)

Figure 7.2. Partial Transition Table

The rows of the transition table represent the internal states and the columns in both sections represent the input state. The intersection squares represent the total states at any given time. In Figures 7.1 and 7.2, the system has total states which are stable and total states which are transient. The stable total states have been circled. The movement from one state to another, depicted by a directed line in the state graph, is not present in the transition table. However, it is implied. For example, if the system is in state a, with both inputs low, and if x2 goes high, then the movement is from the cell under column 00 horizontally to the cell under column 01. Note that the internal state remains the same while only the input state has changed. The cell represents the transition state (state graph line) going from state a to state b. Once the transition is complete, the system is now in internal state b, which is row b, with the input state being 01. It is now in the stable state represented by the symbol b on the state graph. It is common practice to circle the stable states in both sections of the table so that they are easily recognized.
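The horizontal-then-vertical movement just described can be mimicked with a simple table lookup. The sketch below is our addition; the entries follow the partial table as reconstructed from Figures 7.1-7.3, with unspecified total states simply absent:

```python
# next_state[(current internal state, input state)] -> new internal state
next_state = {
    ("a", "00"): "a", ("a", "01"): "b", ("a", "10"): "c",
    ("b", "00"): "a", ("b", "01"): "b", ("b", "11"): "d",
    ("c", "00"): "a", ("c", "10"): "c", ("c", "11"): "d",
}

def run(start, inputs):
    """Apply a sequence of input states: move horizontally to the new
    input column, then vertically to the new internal state (row)."""
    state = start
    for x in inputs:
        state = next_state[(state, x)]
    return state

# From a, x2 goes high (01), then both inputs go low again (00): a -> b -> a
assert run("a", ["01", "00"]) == "a"
assert run("a", ["10"]) == "c"
```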



Transition Table

Current |     Next State      |     Output State
State   | Input State (x1x2)  | Input State (x1x2)
        | 00   01   10   11   | 00   01   10   11
a       | (a)   b    c    –   |  0    0    1    –
b       |  a   (b)   –    d   |  0    0    –    0
c       |  a    –   (c)   d   |  0    –    1    0

(stable total states shown in parentheses)

Figure 7.3. Sequential Movement to New Stable States

The statement can be made that when inputs change, causing a system to go to another state, the motion in the transition table is always horizontal to the column representing the new input state. The motion is then vertical from the transition cell to the new row (internal state), as the internal memory elements are set, representing the new internal state.

7.3. Steps Involved in the Design of Sequential Circuits
The process of designing sequential circuits can be broken down into the following steps:
1. Word problem to state graph and transition table
2. Transition table to reduced transition table
3. Selection of memory devices and state assignment
4. Design of combinational circuits to set the memory elements
5. Final analysis and hazard elimination

7.3.1. Word Problem to State Graph and Transition Table
First, develop a transition table that properly represents the word problem. Most students will find this process easiest by first constructing a state graph and then constructing the corresponding transition table. Others may prefer to develop the transition table directly. Since the state graph and the transition table contain exactly the same information, there are parallel design methods for the two representations. The tabular form of the transition table makes it an easier form to work with in subsequent design stages, and it will be used as the basis for the design steps.

7.3.2. Transition Table to Reduced Transition Table
The first transition table constructed directly from the word problem is frequently referred to as a "primitive transition table." It corresponds to the word problem directly and in a very simple way. However, a circuit developed from this table would not be the simplest circuit or the most economical to build. Since reliability is generally a function of the number of components, the circuit will also not be the most reliable.
In the interest of reliability as well as economics, it is desirable to design a circuit that will perform the same function but require fewer internal memory elements. There are several advantages to this. The memory elements require combinational logic circuits to set them, and the number of combinational circuits required to set the memory elements is directly proportional to the number of memory elements required. Also, each memory element represents an internal variable that is used as the input to the combinational networks (those that set the memory elements as well as the output combinational circuit). Any reduction in the required number of internal memory elements will also reduce the number of variables input to the combinational networks, reducing their complexity.



The net result is that there are substantial advantages to finding a minimum row transition table (equivalent to a minimum state graph) that will respond in exactly the same way to all possible input sequences as the primitive transition table. Such a table is called a minimum equivalent cover. The next chapter will view this process conceptually and introduce two methods that will guarantee the development of a minimum row transition table.

7.3.3. Selection of Memory Devices and State Assignment
Once a minimum row transition table has been found, the circuit to realize the transition table can be designed. The process is not totally amenable to mathematical analysis, because the most economical circuit to perform the function cannot automatically be developed. Instead, a procedure with guidelines is used whereby circuits are constructed that have a reasonably high probability of being minimal cost. The problems involved here are related to the fact that generally more than one type of memory element can be used. These memory elements, in addition to having different individual costs, may have one, two or more inputs. Usually, memory elements with more than one input will have simpler combinational networks feeding those inputs, but since there are more of them, the total cost may be greater. Generally, we must complete all designs that are possible candidates, since we have no way of knowing or computing in advance which system would be the best. Since each row in the transition table represents a particular internal state or condition of the memory elements (on or off), those conditions must at some point be selected to represent the states. This process is called state assignment. In some cases, it requires particular attention to races, where signal propagation speeds can cause faulty operation and can also affect the cost of the realization.
Here, also, there are only guidelines for making good selections and no way of knowing for sure that the best possible selection has been made. These problems will be discussed in greater detail later.

7.3.4. Design of Combinational Circuits to Set the Memory Elements
Once the states have been assigned, the combinational circuits can be designed. The procedure is one of establishing the truth tables for the various inputs and outputs. If minimization is desired, the multiple function Quine-McCluskey procedure can be applied.

7.3.5. Final Analysis and Hazard Elimination
The final step in any design must be to check the operation with respect to the original word problem; that is, to analyze its operation. Even though a circuit has been properly designed, there are some circumstances where signal delays can result in improper operation. A failure under such circumstances is called an "essential hazard," and a final check on the transition table with simulated delays can check for such a failure. Circuit delays can always be added to ensure hazard-free operation.

7.4. Types of Sequential Circuits and Their State Graphs
It has been mentioned before that state graphs take on slightly different forms depending on the technology. As the methods for synthesis evolved, it became convenient to group circuits into two categories: those which operate totally with level-type input signals, and those which have one or more pulses also involved. The first, that is, those systems which operate only with level-type inputs, are said to be fundamental mode circuits. Circuits with one or more pulses are said to be pulse mode circuits. Fundamental mode circuits are also frequently referred to as asynchronous circuits. Their use is limited, depending on the memory elements used and, to some considerable


degree, upon the complexity of the system. They will operate as fast as the elements themselves permit and, for this reason, are preferred where speed is essential and the application is not so complex that the other concerns cannot be handled. Those other concerns are primarily with the speed at which various signals propagate within the circuits and between circuits that are operating synchronously. For example, if a circuit is to respond differently depending on whether variable x1 rises before variable x2, then the circuit performing the task of decision making must never have any ambiguity over which signal rose first. This means that there are limits on the delays permitted in the x1 and x2 signals reaching the circuit. If this problem is multiplied many times over, it can reach unmanageable proportions and force the design of a "synchronized" system, where clock pulses or synchronizing pulses are used to make certain that all signals have had time to propagate and stabilize before the decision process is activated. These synchronized circuits are a subset of the pulse-mode circuits and are usually far easier to design for reliable operation. Fundamental mode design is really the most general and will, for that reason, be considered first. Circuits involving relays or relay-type devices must be designed in fundamental mode, since the concept of pulsing a relay is not a valid one. Switching devices for analog signals will, in general, operate with levels, and we use fundamental mode for that reason. Even if anticipation is not required, allowing the output to be a function of the total state may very well reduce the number of states required and, therefore, reduce the number of memory elements required along with their associated combinational logic. Pulse mode circuits may be required for a variety of reasons.
The synchronized circuits mentioned above are used abundantly in computers, since the communication problems between circuits and systems operating in fundamental mode are untenable. Other systems, such as radar, where pulses represent returning signals, or radiation counters, where the original signals are pulses, require a pulse sensitive technology and are also considered pulse mode. As a brief summary, the following properties of the two modes of operation are assumed to be in effect for the circuits to be designed.



7.4.1. Fundamental Mode
There are no clock pulses to synchronize operations or to protect against races.

[Figure: fundamental mode state graph notation; three state circles a, b and c, each labeled with the state name, the input state x1x2 that brought the system to it when it quiesces, and the output z1z2 when it quiesces; each directed arc is labeled with the input causing movement to the new state and the output during the transition to the next state (the latter is not entered on a primitive state graph)]
Figure 7.4. State Graph for Fundamental Mode

7.4.1.1. An Example of Fundamental Mode - Sample Problem No. 1
Design a circuit that operates in fundamental mode in accordance with the following specifications:
1. There are two inputs and one output (x1, x2, z).
2. The output is to be high if and only if x2 becomes high with x1 already being high. It is to stay high until x2 goes low, and go low at that time.
3. The circuit continuously monitors for all occurrences of the above.
4. Consider both cases:
a. The output is a function of the internal state only.
b. The output is a function of the total state. (Assume speed is not important in the operation, but that reducing the number of components is important.)
It is the nature of most state graphs that where we start to develop them is not important. In some cases there will be an obvious starting point, but in others, such as this one, there is not a definite state that is a beginning state. It is generally desirable to pick an unambiguous state; that is, a state which cannot lie at more than one point in the "successful" or "unsuccessful" sequences. (With some circuits, it is convenient to think of a "successful" sequence of inputs as the sequence which yields an output, and all other sequences as being unsuccessful.) A good candidate for a starting state is always the one with all inputs and outputs low. If it is a state which might exist at more than one point in the successful sequence, then we have a choice of selecting it and denoting sufficient conditions on the state graph to make it unique, or selecting a different state. In this case, the state with all inputs and the output low is a satisfactory starting state.



With some circuits, especially those with success sequences, it is best to develop the success sequence horizontally across the page and then attach the other states as needed to complete the graph. With other circuits, a simple tree structure may be preferred. In this case, either structure could be used, but a tree structure may be better for a novice, since it carries a symmetry that is less likely to lead to errors. If we start with inputs and outputs low, then only two things can occur to cause the circuit to leave this state: x1 can go high, or x2 can go high. (In fundamental mode, it is not possible for more than one input to change at a time.) This is shown in Figure 7.5. In the primitive state graph, a change in input represents a transient condition that causes the circuit to move into a new state. This new state will be stable under the new input state conditions. Then examine the new states to see:
1. What should the output be? Set it accordingly.
2. Has a state existed previously with this input state and this output state?
If the answer to this second question is no, then the state is given a new name and it must be developed further. That is to say, it must go to new states under the admissible changes in the input. If the answer to question 2 is yes, then the two states must be compared as to the overall future output action expected with changes on the input. If this state would have a future exactly the same as a previously developed state, then it is given the same name as that state and there is no need to develop it further.
[Figure: first state development; starting state a (input 00, output 0) with arcs 01/0 to state b (input 01, output 0) and 10/0 to state c (input 10, output 0); a legend shows the state labeling convention x1x2 over z]

Figure 7.5. First State Development

For the case where the output is formed from the internal states only, the output for the transition states would not be included (nor would it appear in the legend). However, we should design fundamental mode circuits to use the total state in the output combinational network, as that may permit further circuit simplification through state reduction. For the case where the output is considered as a function of the total state, it would not have to be entered in the primitive state graph for the transition states, but the remaining design steps


Chapter 7: Introduction to Sequential Circuits

will be a bit easier if it is. A transition state between stable states that both have 0 output must also have 0 output. A transition state between stable states that both have a 1 output must have a 1 output. When the output of the next stable state differs from the output of the current state, then, if speed is not essential, the output should be assigned a don't care symbol. If the circuit is to anticipate certain output states, then the associated transition states would be assigned the desired output. However, if we are designing a circuit where speed or anticipation is not desired, the transition-state outputs should not be assigned, as their assignment would add constraints to the circuit in the next design phase, with a possible increase in cost as a result. Figure 7.6 shows the fully developed state graph for this problem. The outputs during the transient states have been entered under the assumption that speed or anticipation is not required.
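These assignment rules can be captured in a few lines; the helper below is hypothetical (not from the text) and simply encodes the rule for choosing a transition-state output:

```python
# Output assignment for a transition state between two stable states,
# following the rules above: a 0-to-0 transition is forced to 0, a 1-to-1
# transition is forced to 1, and a differing pair is left as a don't care
# ("-") unless the circuit is to anticipate, in which case the output of
# the destination stable state is used.

def transition_output(z_from, z_to, anticipate=False):
    if z_from == z_to:
        return z_from                 # forced: both ends agree
    return z_to if anticipate else "-"
```

For example, transition_output(0, 1) yields a don't care, avoiding an unnecessary constraint (and possible extra cost) in the next design phase.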
[Fully developed state graph: stable states a through f, each labeled with the input state x1x2 under which it is stable and its output z; transitions are labeled x1x2/z; the graph is developed as a tree structure. The legend gives x1 x2 and z.]
Figure 7.6. Fully Developed State Graph: Sample Problem #1


Problem 7.1. Construct state graphs for sequential circuits operating in fundamental mode to perform according to the following specifications (each is a separate problem). Assume that speed and anticipation are not required unless specifically stated.
1. a. There are two inputs x1 and x2.
   b. The output is to be 1 if and only if both inputs are the same and x2 was the last variable to change.
2. a. There are two inputs R and S, and one output Q.
   b. R and S will never be high at the same time.
   c. If R is 1, then Q is to be 0 and is to remain 0 until S becomes 1.
   d. If S is 1, then Q is to be 1 and is to remain 1 until R becomes 1.
3. a. There are two inputs x1 and x2 and one output z.
   b. If x2 = 1, then z = x1.
   c. If x2 = 0, then z remains at its previous value.
4. a. There are two inputs x1 and x2 and one output z.
   b. The output is to be 1 every other time the input state is (x1,x2) = (1,0) and 0 otherwise.
5. a. There are two inputs x1 and x2 and one output z.
   b. The output is to be 1 if x1 was the last input variable to change and 0 if x2 was the last input variable to change.
6. a. There are two inputs x1 and x2 and two outputs z1 and z2.
   b. The output is to be a binary number that indicates the number of times either x1 or x2 has changed value, up to three changes. On the fourth change of either x1 or x2, the output is to reset to the binary number 00, and the circuit is to renew its counting.
7. a. There are two inputs x1 and x2 and one output z.
   b. The output z is to be 1 if and only if the sequence of input states (x1,x2) has been (0,0), (0,1), (0,0), (0,1), (1,1).
   c. The output is to be 1 only as long as the input state is (1,1) and the circuit is to monitor continually for that sequence.
   d.
The circuit is to be designed to anticipate the next output value.

7.4.2. Pulse Mode Circuits

In pulse mode circuits, pulses are used to synchronize state transitions. In these circuits, time is provided for all circuits to quiesce between pulses. When the pulse occurs, states will not have changed; as a matter of fact, they must somehow be guaranteed not to change, otherwise signals to set or reset flip-flops would get confused. The effect of the synchronizing pulse is to signal the flip-flops to change and then to disable the inputs immediately, so that any state changes cannot affect the circuit signaling until the next synchronizing pulse occurs. Races do not occur, since all circuits are disabled and have time to quiesce before they are pulsed again. From a design point of view, there is no problem with more than one input changing at a time, since those changes all have time to take place during the period between clock pulses, and which signal changes first cannot affect operation. Also, with fundamental mode, we visualized the system moving into a stable state for a given input state. With pulse mode, however, all states are like stable states, because they are stable until a clock pulse comes along. It is possible to have pulse mode circuits that


change states indefinitely with every clock pulse even though the inputs stay the same. We use a different picture to represent pulse mode circuits. There are two types of pulse mode circuits:
1. Circuits with level inputs and a synchronizing pulse
2. Circuits without level inputs, or with more than one pulse input

7.4.3. Synchronized Circuits with Level Inputs

Synchronized sequential circuits with level-type inputs form a very important class. The following characteristics are common to circuits of this class.
1. Inputs are levels except for a synchronizing pulse. Changes in the input signals between pulses are of no concern.
2. All inputs will have stabilized before the pulse appears. (No level input change can occur coincident with a pulse.)
3. All memory elements will have time to quiesce before the next pulse appears (actually a constraint on the clock rate).
4. Pulses are assumed to be of infinitesimal duration. (They result in memory elements being set, but they "freeze" the internal state during their presence so that the combinational logic decisions can be based on the internal state at the instant the pulse appears.)
5. Outputs are levels. Note: In pulse mode design, the output levels are generally not used except at pulse time. This means they may be unreliable until the circuit has settled. If they are only of interest at pulse time, they may be considered as outputs that are a function of the total state. If the output must be reliable during the entire period between pulses, then it will be a function of internal state only, unless the input levels are well behaved (as for a fundamental mode circuit). If this is the case, the problem should be handled as fundamental mode, with the pulse considered as a level input as well.
6. State graphs for these types of circuits include:
   a. The internal state with associated name. (Note that the inputs are not associated with the stable or internal state, since they may be changing erratically.)
   b.
The output state desired with this internal state.
   c. A line representing the transition state, along with the input state that will create the transition when the pulse occurs. (A pulse output may be associated with this transition, since the pulse may be and-ed with the input and internal state variables to produce an output pulse. If the levels are interrogated only at pulse time, the output at that time is associated with the transition total state.)
Note: The clock or synchronizing pulse does not need to appear in the state graph, since the transition states cannot occur without it. Figure 7.7 shows a reasonable format for the state graph for synchronized circuits with level inputs.
Note: Circuits which have outputs that are levels and are a function of internal state only are called Moore machines.
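The key property listed above, that level inputs matter only at the instant the pulse occurs, can be illustrated with a small simulation. The state names and table below are invented for this sketch:

```python
# A synchronized machine: the level input x is sampled only when the
# clock pulse occurs; between pulses the internal state cannot change.

# (state, x sampled at pulse time) -> new internal state (toy table)
next_state = {("a", 0): "a", ("a", 1): "b",
              ("b", 0): "a", ("b", 1): "b"}

def on_pulse(state, x):
    """Advance the machine by one synchronizing pulse."""
    return next_state[(state, x)]

# x may fluctuate freely between pulses; only its value at pulse time counts
s = "a"
for x_at_pulse in [1, 1, 0]:
    s = on_pulse(s, x_at_pulse)
```

However x wiggles between pulses, the trajectory is determined entirely by the sampled values 1, 1, 0, which carry the machine a, b, b, a.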


[State graph format: internal states a, b, c, each annotated with its state name and level outputs z1 z2 (the output when the system quiesces); directed lines between states carry labels of the form x/z3, giving the input state at pulse time causing movement to the new state and the pulse output at pulse time.]
Figure 7.7. State Graph for Synchronized Circuit With Levels Input
7.4.4. Pulse Circuits Without Level Inputs

7.4.4.1. Circuits with a Single Pulse Input (no levels), or with Multiple Pulse Inputs (possibly with levels)
1. Any level inputs will have stabilized by the time any pulse input occurs.
2. In circuits with two or more pulse inputs, only one pulse can go high at any time.
3. Pulses may never occur so close together that internal memories will not have time to quiesce.
4. Level outputs can only be a function of the internal state (with exceptions as for the synchronized circuits above).
5. Pulse outputs may (should) be a function of the total state.
6. The state graph should contain:
   a. The internal state and its associated name.
   b. The output state associated with the internal state, if any.
   c. A line representing the transition state, with the input state that creates the transition and the pulse outputs associated with the pulse inputs.
Figure 7.8 shows a possible construction for a state graph associated with a system that has two pulse inputs and two pulse outputs. Note that a transition can only occur with (p1, p2) = (0,1) or (1,0). All pulses are shown as signals (only one pulse can occur at a time). The legends for both pulse mode circuits are essentially the same. These two types of circuits are generally considered together as one type of circuit (pulse mode).

Note: Circuits with output pulses that are a function of the total state are called Mealy machines.

[State graph: internal states a, b, c, each annotated with its state name and level outputs z1 z2 (the output when the system quiesces); directed lines between states carry labels of the form p1 p2 / z3 z4, giving the input pulse causing movement to the new state and the pulse outputs coincident with p1 or p2, respectively.]
Figure 7.8. Possible State Graph for Pulse-Driven Circuits

7.4.4.2. An Example of Pulse Mode - Sample Problem No. 2

Design a circuit that operates in pulse mode according to the following specifications:
1. There is one level input, one pulse input, and one pulse output.
2. The circuit is to provide a pulse output if and only if the level input is low for (at least) two successive pulses and then high for the next two successive pulses. The output pulse is to be coincident with the second of the two successive high recognitions. The circuit is to monitor the input continuously for the above sequence regardless of outputs.
The procedure is exactly the same as for Fundamental Mode. In this case, it is not clear at the outset that there is a starting state that is unambiguous. We must note on the state graph the state at which we are starting. Again, we can choose any state. In this case, we happen to choose a state where the output signal will be low and the input x has been low for exactly one pulse.
[State graph: the success sequence of states laid out along a straight line from a to d, with transitions labeled x/z (0/0 and 1/0 along the way, and a final 1/1 into state d); state a carries the note "x has been low for one pulse." The legend gives the state name and the transition label x/z.]
Figure 7.9. State Graph for Sample Problem No. 2


In this case, we have chosen to display the success sequence as a straight line. In developing the circuit, we see that from state d there will be no further high output until the system has gone through state a. State d may be viewed as a state waiting for x to go low, a state which has no hope of output until x goes low. We can see now that state a is not ambiguous if we keep the note that x has been low for only one pulse. Figure 7.9 shows a completed graph for this problem. We see that state b represents a state where x has been low at least two cycles. If x goes high from state a, then there is no hope of output until x goes low again. This means it can go to state d. In developing the state graph, if we do not notice when a particular state is equivalent to one already developed, but continue to develop the state as though it were a new one, the state graph is not wrong (providing the correct outputs are maintained); it simply contains more states than it needs to contain. This creates a bit more effort in the reduction stage but is otherwise of no concern. The reduction process will reduce all correct state graphs (called equivalent covers) to the same minimal state system.

Problem 7.2. Construct state graphs for sequential circuits operating in pulse mode to perform according to the following specifications (each is a separate problem).
1. a. There is a level input x and a pulse input p, and four level outputs z1z2z3z4 to display a binary count.
   b. The circuit is to behave as an up-down counter, counting in binary from 0 to 12.
   c. The circuit is to count up if x is high and down if x is low.
   d. When counting up, on reaching 12, the count is to remain at 12 until x goes low.
   e. When counting down, on reaching 0, the count is to remain at 0 until x goes high.
2. a. There are two input pulses p1 and p2 and four level outputs z1z2z3z4 to display the binary count.
   b.
The circuit is to behave as an up-down ring counter, counting in binary from 0 to 12.
   c. A p1 pulse is to cause the counter to count up and a p2 pulse is to cause the counter to count down.
   d. Counting upward from 12 produces a count of 0.
   e. Counting downward from 0 produces a count of 12.
3. a. There are two input pulses p1 and p2 and an output pulse z.
   b. There is to be an output pulse coincident with the last p1 pulse if and only if the sequence of pulses is p1,p2,p2,p1.
4. a. There is a level input x, a clock pulse p, and an output pulse z.
   b. There is to be an output pulse coincident with the second clock pulse after the last (level) change in x.
5. a. There are two level inputs x1 and x2 and a synchronizing pulse.
   b. There is to be an output pulse if x1 is ever low for more than two successive clock pulses without x2 having changed.

7.5. Types of Sequential Circuits and Their Transition Tables

The transition tables for all sequential circuits are very much the same, and they are all reduced in the same way to minimum row transition tables. However, they do have slightly different structures depending on the technology involved, similar to the slightly different structures involved in the state graphs.


The constraints on input and output signals are reflected in the transition tables, primarily through the possible absence of input state columns in the next state and output sections of the transition tables. Much of what will seem repetitious is discussed here to show the effect on the transition table.

7.5.1. Fundamental Mode

If a circuit is to operate in Fundamental Mode, the inputs and outputs will always be of the level type (as opposed to pulses). The level input signals are of the same form as the output signals and can be used along with memory element outputs to form the circuit output signal. If the output is formed only from the output of memory elements, we say the output is a function of the internal states only. If the output is formed utilizing the input signals also, then the output is a function of the total state. Allowing the output to be a function of the total state has two advantages:
1. The circuit may require fewer internal states.
2. The circuit can respond faster (a form of anticipation).
Generally, we should allow the output to be formed from the total state, since it may result in fewer internal states.

7.5.1.1. Fundamental Mode Transition Tables

7.5.1.1.1. Outputs as a Function of Internal State Only

In this case, the output is strictly a function of the internal state. Since internal states are represented by the rows of the transition table, there can be only one output state for each row. The layout of the transition table is shown in Figure 7.10. It is customary to circle the next state that is the same as the state representing the row, since it represents the stable state condition. Don't cares (dashes) are indicated in those cells of the next state section that cannot be reached through admissible input state changes.
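The difference between the two output disciplines can be made concrete in a toy sketch (all tables here are invented for illustration, not taken from the text):

```python
# Output as a function of internal state only: one output per row of the
# transition table, so z cannot change until the internal state changes.
z_internal = {"a": 0, "b": 1}

# Output as a function of total state: one output per (row, input column),
# so z can respond to an input change before the state does (anticipation).
z_total = {("a", "00"): 0, ("a", "01"): 1,
           ("b", "00"): 1, ("b", "01"): 1}

state = "a"
# internal-state output: fixed at 0 while the circuit remains in state a
assert z_internal[state] == 0
# total-state output: already 1 as soon as the input column moves to 01
assert z_total[(state, "01")] == 1
```

The extra columns in z_total are exactly the expanded output section described in the next subsection.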
Transition Table

                 Next State: Input State (x1 x2)
Current State    00    01    10    11       z
      a           a     b     c     -       0
      b          ...

Figure 7.10. Transition Table for Fundamental Mode (Outputs as a Function of Internal State Only)
7.5.1.1.2. Outputs as a Function of Total State

If the output is a function of the total state, there can be different outputs for each input state. The output section of the transition table must be expanded to permit this possibility. Since each row represents an internal state, there is one column in the output section for each input state. The typical Fundamental Mode Transition Table for this type of circuit is shown in Figure 7.11.


Transition Table

                 Next State:                Output State:
                 Input State (x1 x2)        Input State (x1 x2)
Current State    00    01    10    11       00    01    10    11
      a           a     b     c     -        0     0     1     -

Figure 7.11. Fundamental Mode With Outputs a Function of Total State

7.5.1.2. An Example of Fundamental Mode - Sample Problem No. 1

The student is now referred to Figure 7.6 and to the section where the state graph was developed for Sample Problem No. 1. The following information is pertinent:
1. The mode is fundamental.
2. There are two level inputs, x1 and x2.
3. There is one level output; two cases are considered:
   a. The output is a function of internal states only.
   b. The output is a function of total state; speed is not important.
Case a. Since the output is to be a function of internal state only, the transition table will have only one column devoted to output.
Transition Table

                 Next State: Input State (x1 x2)
Current State    00    01    10    11       z
      a           a     b     c     -       0
      b           a     b     -     d       0
      c           a     -     c     e       0
      d           -     b     c     d       0
      e           -     f     c     e       1
      f           a     f     -     e       1

Figure 7.12. Transition Table for Sample Problem No. 1, Case a

Case b. The output is allowed to be a function of input levels as well as the internal state. This means the output section must be expanded so that one column is available for each input state.


Transition Table

                 Next State:                Output State:
                 Input State (x1 x2)        Input State (x1 x2)
Current State    00    01    10    11       00    01    10    11
      a           a     b     c     -        0     0     0     -
      b           a     b     -     d        0     0     -     0
      c           a     -     c     e        0     -     0     *
      d           -     b     c     d        -     0     0     0
      e           -     f     c     e        -     1     +     1
      f           a     f     -     e        +     1     -     1

*Note: If speed is not important, this is a don't care; but if a 1 output is to be anticipated, this would be 1.
+Note: If speed is not important, this is a don't care; but if a 0 output is to be anticipated, this would be 0.

Figure 7.13. Transition Table for Sample Problem No. 1, Case b

For this case, where the output is considered as a function of the total state, the output for the transient states would not have to be entered in the table, but the remaining design steps will be a bit easier if it is. A transition state between stable states that both have 0 output must also have 0 output. A transition state between stable states that both have a 1 output must have a 1 output. If the circuit is to anticipate certain output states, then the associated transition states would be assigned the output of the stable state to which they are going. However, if we are designing a circuit where speed or anticipation is not desired, then the output for transition states (other than 0 to 0 and 1 to 1) should be left as don't cares, as their assignment would add constraints to the circuit in the next design phase, with a possible increase in cost as a result. It should be pointed out at this time that if we have a transition table for outputs as a function of internal state only, it can easily be expanded as above to permit the outputs to be a function of total state.

Problem 7.3. Set up transition tables for the circuits specified in Problem 7.1.

7.5.2. Pulse Mode

Pulse mode circuits as a class have many subclasses.
1. Inputs can be levels, but a clock pulse is used to synchronize the system and eliminate races.
2. Inputs can be pulses.
3. Inputs can be a mixture of levels and pulses.
For each of these cases, the output can also consist of levels, pulses, or a mixture of levels and pulses. We must use a bit of reasoning in setting up the transition table. For example, if an output is to be a level type output, then we cannot use any pulse inputs in the output combinational logic. The intent of using pulse mode in circuits with levels in and levels out is to make sure that varying input levels do not affect the sequence of events except at


snapshot times when the pulse occurs. If the level inputs are used in forming the output, their fluctuations will influence that output. If the output levels are only of concern at pulse time (which is most frequently the case), then the inputs will have stabilized in time and can be used in conjunction with the internal memory states to form the output. However, if a level output is used between clock pulses, it would have to be formed from internal states only (unless the inputs were well behaved, in which case we should probably be using fundamental mode techniques). If the output is a pulse, then both the pulse input and level inputs may be used in the design of the output circuit. (The level inputs must always be guaranteed to stabilize by the time the pulse occurs as a basic condition for the case of pulse mode operation.) Once again, if the output can be made a function of the input states as well as the internal states, we can develop a design with fewer internal states. Level outputs that are a function of the internal state only can be specified using a single column in the output section. This column is given a heading of N (for Null) and is used to hold the level outputs that are to exist during the time between pulses. With counters, the output will be levels indicating the pulse count. In clocked circuits, these levels are generally not guaranteed to have settled down until the next clock pulse comes along. If the output is a pulse output or a level output that is required only at pulse time, it may be formed as a function of the total state, and the output section must be expanded to permit a column for each input state. Figure 7.14 shows a transition table corresponding to the state graph in Figure 7.7. The level outputs z1 and z2 are entered in the column headed N, and the pulse output z3 is made a function of the input variable x. Note that with pulse mode, all states are stable no matter what happens to the circuit input variables. 
The result is that when a circuit is between pulses, the input states may vary erratically and the actual horizontal position in the transition table is indeterminable. However, before the next clock pulse comes along, the inputs will have settled into one specific column. The intersection of that column with the row representing the state the circuit is in contains the next state entry. That cell acts very much like a transition cell in the Fundamental Mode Transition Table. When the pulse occurs, the circuit will move (vertically) to the next state specified by that cell. After reaching the row corresponding to that state, it will stay in that row until the next pulse occurs. However, it will not necessarily remain in that column, as the actual inputs may be changing.

Transition Table

                 Next State:       N         z3:
                 Input (x)                   Input (x)
Current State    0     1         z1 z2       0     1
      a          b     c          00         0     0

Figure 7.14. Transition Table Corresponding to Figure 7.7

Figure 7.15 shows the transition table corresponding to the State Graph in Figure 7.8. Again, level outputs are placed in the column headed N. Since both inputs are pulses, the circuit will remain in the state it is in until either p1 or p2 occurs. There is no need for a 00


column, since that is the stable condition and the circuit remains in the same state until a pulse occurs. There is also no need for a 11 column, since two input pulses cannot occur simultaneously. The pulse outputs are functions of the total state, and the 01 and 10 columns for p1 and p2 are included in the output section.

Transition Table

                 Next State:          N             Pulse Outputs (z3 z4):
                 Input (p1 p2)                      Input (p1 p2)
Current State    01     10        z1 z2             01      10
      a           b      c          00               0       0

Figure 7.15. Transition Table Corresponding to Figure 7.8

The transition table for Sample Problem No. 2 is shown in Figure 7.16. It is the nature of pulse mode circuits that the output section of the transition table is more completely specified than it is for Fundamental Mode tables.
Transition Table

                 Next State:       z:
                 Input (x)         Input (x)
Current State    0     1           0     1
      a          b     d           0     0
      b          b     c           0     0
      c          a     d           0     1
      d          a     d           0     0

Figure 7.16. Transition Table for Sample Problem No. 2
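The behavior specified in Figure 7.16 can be checked by direct simulation; the short sketch below transcribes the table into dictionaries (the function name is invented for this sketch):

```python
# Transition table of Figure 7.16 for Sample Problem No. 2:
# (state, x at pulse time) -> next state, plus the pulse output z
# associated with that total state.
next_state = {("a", 0): "b", ("a", 1): "d",
              ("b", 0): "b", ("b", 1): "c",
              ("c", 0): "a", ("c", 1): "d",
              ("d", 0): "a", ("d", 1): "d"}
z_out = {(s, x): 0 for s in "abcd" for x in (0, 1)}
z_out[("c", 1)] = 1   # second successive high after two lows: output pulse

def run(xs, state="a"):
    """Apply the value of x seen at each successive pulse; return the
    pulse outputs (state a means x has already been low for one pulse)."""
    outs = []
    for x in xs:
        outs.append(z_out[(state, x)])
        state = next_state[(state, x)]
    return outs
```

For example, run([0, 0, 1, 1]) produces the outputs 0, 0, 0, 1: the pulse occurs on the second successive high, exactly as the specification requires.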


Problem 7.4. Set up transition tables for the circuits specified in Problem 7.2.

7.6. Transition Tables Viewed Dynamically

We have designed transition tables for both fundamental mode and pulse mode. It is desirable now to follow the action in the transition table as the input signals to the circuit change. The operation is different in subtle ways for the two modes of operation.

7.6.1. Fundamental Mode

First, consider only the transition section of the transition table. In Fundamental Mode, the total state of the circuit is represented by both the row (internal state) and column (input state). This means that the current condition of the circuit is represented by a particular cell. To view the operation, consider the circuit to be in some particular total state. Find the cell in the transition table that represents that state. Then consider what happens when the signals change. In a fundamental mode circuit, only one signal will change at any time, and the circuit will always quiesce before the next signal changes. This leads to the statement: In a transition table, the first motion is always horizontal, from the current column


to the column which represents the next signal (see Figure 7.3). The motion in the transition section is the equivalent of the circuit sensing the change in input signals. In a primitive transition table, the new cell will correspond to a transition state, and the contents of that cell will determine the row (and cell) to which the action flows. This leads to the statement: After moving horizontally to the column representing the new input signal, the motion will now be vertical, to the row (state) stipulated by the contents of that cell. In the primitive transition table, this will be a stable state where the circuit waits for the next change in the input signals, after which the horizontal and vertical motions again take place. In Chapter 8, we will find that when transition tables reduce, a transition state may become a stable state, in which case the horizontal motion is not followed by any vertical motion, since that cell contains the same state as the original cell (same row). Also, when state assignments are made, it is sometimes desirable to patch up a fundamental mode transition table to prevent race conditions (Chapter 9). This may result in a second vertical transition to reach the final stable state. In Fundamental Mode, all conditions in the circuit are represented by the cells in the transition table. The motion between cells in the output section exactly parallels the motion in the transition section, and the output signal can be easily traced as well.

7.6.2. Pulse Mode

Pulse mode operation is very similar to Fundamental Mode. The primary difference is that any number of level-type input signals may change during the time between pulses. In many ways, this is a much simpler operation. Whereas in Fundamental Mode the circuit will quiesce to a particular cell, in pulse mode it is best to think of the circuit quiescing to somewhere in a particular row. We don't care which column the actual circuit is in when there are no pulses present.
However, when a pulse occurs, the column is important. We observe which cell represents the internal state (row) and which column represents the input signal when the pulse occurs. The contents of the cell in the transition portion of the transition table tell us which row the circuit will be in when the next pulse arrives. Again, we are not concerned with which states (or rows) the circuit may pass through during the interim. We are simply guaranteed that when the next pulse arrives we will be in the appropriate row. In a pulse mode circuit, the output pulses occur coincident with the pulses that drive the circuit into new states. In the output section of the transition table, observe the contents of the cell at the intersection of the column representing the input state at pulse time and the row representing the internal state of the circuit at pulse time. If it contains a 1, an output pulse will occur coincident with the input pulse; otherwise, the output remains at 0.

Problem 7.5. For the transition table in Figure 7.12, trace the output signal as a function of time if the sequence of the input signals is as follows. (Begin in state a and show the state the circuit is in on the diagram.)
a. 00-01-11-10-11-10-11.
b. 00-10-11-01-00-01-11.
c. 00-01-00-10-11-01-00.
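The horizontal-then-vertical motion described above can be mechanized; the sketch below uses the next-state section of Figure 7.12 (the dictionary transcribes that table, "-" marks a don't-care cell, and the example sequence is an arbitrary illustration, not one of the assigned problems):

```python
# Next-state section of Figure 7.12: current state -> {input x1x2: next state}
table = {
    "a": {"00": "a", "01": "b", "10": "c", "11": "-"},
    "b": {"00": "a", "01": "b", "10": "-", "11": "d"},
    "c": {"00": "a", "01": "-", "10": "c", "11": "e"},
    "d": {"00": "-", "01": "b", "10": "c", "11": "d"},
    "e": {"00": "-", "01": "f", "10": "c", "11": "e"},
    "f": {"00": "a", "01": "f", "10": "-", "11": "e"},
}
z = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 1, "f": 1}  # output per row (Case a)

def trace(inputs, state="a"):
    """For each new input state: move horizontally to its column, then
    vertically to the row named in that cell; record (state, output)."""
    history = []
    for x in inputs:
        state = table[state][x]
        history.append((state, z[state]))
    return history
```

For instance, trace(["00", "10", "11", "01"]) settles through states a, c, e, f with outputs 0, 0, 1, 1.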
Problem 7.6. For the transition table in Figure 7.16, trace the output signal as a function of time if the input signal takes on values at pulse time as follows. (Assume the circuit starts in state a. Show the state the circuit is in on the diagram.)
a. 0-1-0-1-1-1-1


b. 1-0-0-1-1-0-0
c. 0-0-0-0-0-1-1

7.7. Additional Problems for Chapter 7

NOTE: Draw state graphs and set up transition tables wherever a design is implied.

Problem 7.7. A fundamental mode circuit with 2 inputs and 1 output is to be designed. The circuit is to produce a 1 output if and only if both inputs are high as a result of the following input sequence. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x1, x2, and z not reproduced.]

Problem 7.8. A fundamental mode circuit with 2 inputs and 1 output is to be designed. The circuit is to produce a 1 output if and only if the following sequence has occurred at the inputs. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x1, x2, and z not reproduced.]

Problem 7.9. A fundamental mode circuit with 2 inputs and 1 output is to be designed. The circuit is to produce a 1 output if and only if the sequence at the inputs has been as follows. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x1, x2, and z not reproduced.]

Problem 7.10. A fundamental mode circuit with 2 inputs and 1 output is to be designed. The circuit is to produce a 1 output if and only if both inputs are high as a result of the following input sequence. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x1, x2, and z not reproduced.]


Problem 7.11. A fundamental mode circuit with 2 inputs and 1 output is to be designed. The circuit is to produce a 1 output if and only if both inputs are high as a result of the following input sequence. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x1, x2, and z not reproduced.]
Problem 7.12. A fundamental mode circuit with 2 inputs and 1 output is to recognize either (both) of the following sequences, putting out a 1 if and only if either has occurred and going to 0 with the first change in signal thereafter.

sequence #1        sequence #2
 x1 x2              x1 x2
  0  0               0  0
  0  1               1  0
  0  0               1  1
  1  0               0  1

Problem 7.13. A fundamental mode circuit with 2 inputs and 1 output is to recognize either (both) of the following sequences, putting out a 1 if and only if either has occurred and going to 0 with the first change in signal thereafter.

sequence #1        sequence #2
 x1 x2              x1 x2
  0  0               0  1
  0  1               1  1
  1  1               0  1
  1  0               1  1
  0  0

Problem 7.14. The input to a fundamental mode circuit consists of three bits (labeled x, y, z). The input variables cannot take on the decimal equivalents of 0, 1, and 5, and only adjacent signals can occur at the input. The circuit is to provide (on a continuing basis) an output for either of the sequences (in decimal) of:
a. 2-3-2-6-4
b. 2-6-4-6-7

Problem 7.15. A circuit with a single level input and a clock input is to be designed. The circuit is to produce a level output of 1 if and only if the sequence on the x input occurs as below. The output is to go to 0 on the next pulse after the output becomes 1. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x, p, and z not reproduced.]


Problem 7.16. A pulse mode circuit is to check the value on a single level-type input at clock pulse time. It is to put out a pulse if the following sequence of values has occurred: x sequence = 0 1 1 0 1 1. The output pulse is to be coincident with the clock pulse when the last 1 in the sequence has been detected.

Problem 7.17. A circuit with two level inputs and a clock pulse input is to be designed. The circuit is to produce a level output of 1 if and only if the sequence on the two lines is as shown. The output is to go to 0 on the next pulse after the output becomes 1. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for x1, x2, p, and z not reproduced.]

Problem 7.18. A circuit with two level inputs and a clock pulse input is to be designed. The circuit is to produce an output pulse if and only if the sequence below occurs. The output pulse is to be coincident with the last clock pulse of the successful sequence. (Note: the circuit continues to operate the same after an output.)

[Timing diagram for Problem 7.18: waveforms x1, x2, p, and z.]

Problem 7.19. A circuit is to be designed that has two level inputs (x1, x2) and a synchronizing clock pulse. It is to have a pulse output (synchronized with a clock pulse) if and only if both inputs have been low for at least two cycles (clock pulses) and the subsequent conditions on the inputs are either:
a. (0,1) followed by (1,1), or
b. two (1,0) conditions in a row.

Problem 7.20. A circuit is to be designed to sample a voltage on an input line at specific intervals of time. If the voltage is lower than five volts for two consecutive samples and then higher than five volts for two consecutive samples, an output is to occur with the sampling pulse at the time that the second high value is detected.

Problem 7.21. A circuit is to sample the input of an RS-232 line and detect if a "control y" input has occurred. Its output is to go high and remain high for one clock pulse. (The ASCII code for control y is given in Appendix IV as the control code EM. Assume the least significant bit appears first, and that the circuit will always be reset to state "a" before the first bit appears.)


Problem 7.22. A circuit is to be designed that will have two input pulses (p1 and p2) and a single level output (z). It is to have a level output of 1 as soon as the pulse sequence (p1,p2,p1,p2) occurs, going to 0 when the next (either) pulse appears. It is to detect all occurrences of that sequence.

Problem 7.23. Excess-3 code has certain advantages over binary-coded decimal when working with decimal equivalent numbers. A circuit is to be designed that will receive the four bits of an Excess-3 coded number, and produce a pulse output if an invalid Excess-3 code has been received. Consider:
a. The case where "bit" times are not available, but the circuit will be reset to state "a" for the beginning of each number.
b. The case where "bit" times are available. ("Bit" times means that there is a p1 pulse for the least significant bit, a p2 pulse for the next bit, etc. In this case, there are four different pulses available for input to the circuit as separate inputs.)
Note: In Excess-3 code, the binary numbers 3 through 12 represent the decimal numbers 0 through 9, respectively. (See Appendix IV.)

Problem 7.24. A circuit is to be designed that will have three pulse inputs (p1, p2, and p3) and a single level output (z). It is to have a level output of 1 as soon as the pulse sequence (p1,p3,p3,p2,p2) occurs, going to 0 when the next pulse occurs. It is to detect all occurrences of that sequence.


8. Minimization of Transition Tables


8.1. Introduction

In the previous chapter, primitive state graphs and transition tables were formed from word specifications. In this chapter, the transition table will be used to represent the specified sequential circuit. In order to develop a more reliable and more economical circuit, the next step in design is to find a minimum-row (and therefore minimum-state) transition table that will perform the same function as the primitive transition table. All transition tables that produce the same output sequences for the same input sequences are said to be equivalent (called equivalent covers). The problem is to find a transition table with the fewest rows (called a minimal cover) that is equivalent to the primitive transition table.

There are two basic methods available for finding an equivalent minimum-row transition table:
The Huffman-Mealy Method
The Compatible Pairs Method

With each of these methods, there are two important concepts involved:
Coverage
Closure

Coverage implies that each state in the primitive transition table, especially with respect to its output specification, must be included in (covered by) the resulting table. Closure implies that all of the transition specifications of the primitive system must be met. In both methods, coverage is accomplished by using the primitive states to build superstates that will clearly contain all of the primitive states. It is interesting that any given primitive state may be found in more than one superstate.

With the Huffman-Mealy method, superstates called output-class sets (oc-sets) are generated at the beginning to contain the largest possible number of primitive states consistent with their output requirements. A closure test is then performed and, if closure is not possible, the oc-sets are split into smaller oc-sets, again keeping them as large as possible. This closure test and splitting action continues until the resultant oc-sets demonstrate closure.
Through the splitting action, the original primitive states may appear in more than one oc-set; a final test is required to see if any of the superstates can be removed without destroying the equivalent action.

The compatible pairs method allows the development of maximal compatible sets (called m-sets) through an algorithm that evaluates the states in a pair-wise fashion. The resultant m-sets are the equivalent of the final oc-sets of the Huffman-Mealy method. The developed m-sets are guaranteed to have coverage and closure. Again, a final step is required to see if all the m-sets are required.

The final step for both of these methods is quite simple for "input restricted" systems - that is, if the don't cares in the next state section of the transition table result from constraints on the input signals of the type that exists with fundamental mode. If this is not the case, the Grasselli-Luccio procedure provides a tabular method for guaranteeing a minimal state system.


The Huffman-Mealy method produces results in those cases where the transition table is not too large and the output section of the transition table is fully or nearly fully specified. However, it is cumbersome with large tables or when there are a large number of don't cares in the output section of the transition table. The compatible pairs method is a bit longer for small problems, but is not as sensitive to the size of the transition table or the number of don't cares in the output section as is the Huffman-Mealy method. It is also somewhat less dependent on the observational talents of the user. The Huffman-Mealy method is presented first since it is a bit easier to grasp conceptually, and because it will help the student understand equivalent covers in sequential circuits. This is followed by presentations of the compatible pairs method and the Grasselli-Luccio procedure.

8.2. The Conflict Resolution Operator and Algorithm

Before presenting these methods in detail, a new set operator, the Conflict Resolution Operator, is introduced. This operator is a binary operator since it has two operands. The operands are sets of sets, and the result of the operation is a set of sets.

8.3. Set Division By Exclusion

Consider the situation where a given set contains, among many other elements, two or more elements with conflicting values in some attribute. Assume we would like to split the set into a minimum number of subsets with non-conflicting values in that attribute, in such a way that the original conflicting elements are in separate subsets and, furthermore, each subset contains all of the elements from the original set that are not in conflict with respect to that attribute. The resultant set of subsets will, of course, still cover (contain all the elements of) the original set. We will call this operation the conflict resolution operation.
In order to develop an algebraic expression for the operation, we shall introduce an operation called set-division-by-exclusion, using the / symbol as the operator. (Since this is the only set division we will be defining, it will be referred to here simply as set division.)

Let A, B, ---, Q and R be sets. We define the set division operator (/) by the following equivalence relation:

Definition 8.1. {R / {A, B, ---, Q}} = {R ∩ A', R ∩ B', ---, R ∩ Q'}

where the prime denotes set complement. We shall call {A, B, ---, Q} the exclusion set. We then extend the operation by allowing R to be a set of sets {Ri}, i = 1, ---, n, giving:

{{Ri} / {A, B, ---, Q}} = {{Ri} ∩ A', {Ri} ∩ B', ---, {Ri} ∩ Q'}
                        = {R1 ∩ A', R2 ∩ A', ---, Rn ∩ A', R1 ∩ B', ---, Rn ∩ Q'}

We now permit some corruption in notation in order to simplify the algebraic manipulation. R/(ac,eg) will be taken to imply that ac is the set {a,c} and that eg is the set {e,g}. Note that ac is really a ∪ c, so that R ∩ (ac)' is really R ∩ a' ∩ c'. It will be easier to recognize that the sets produced in the division process will be one subset of R with the elements a and c removed (if they are in R), and one subset of R with the elements e and g removed (if they are in R). So we have:

R/(ac,eg) = {R ∩ a' ∩ c', R ∩ e' ∩ g'}
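In code, set division is simply subtraction of each exclusion set from R (equivalently, intersection with its complement). A minimal Python sketch of Definition 8.1 (the function name is my own, not from the text):

```python
def set_division(R, exclusion_sets):
    """Definition 8.1: R / {A, B, ---, Q} = {R - A, R - B, ---, R - Q}.
    Subtracting an exclusion set is the same as intersecting R with its
    complement."""
    return [R - E for E in exclusion_sets]

R = set("abcdefg")
# R/(ac, eg): one subset of R with a and c removed, one with e and g removed.
result = set_division(R, [set("ac"), set("eg")])
print([''.join(sorted(S)) for S in result])   # ['bdefg', 'abcdf']
```

The corrupted notation R/(ac,eg) maps directly onto the two exclusion sets passed in the list.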

In viewing the properties of the operator, we note:
1. The operator is not commutative, in that each set (in the set of sets) on the left is always intersected with the complement of each of the sets in the set of sets on the right.
2. Within each set of sets, the sets commute, since position is not important to the result.

In general use, there will be several operations concatenated, and we need to examine the properties of the resultant concatenation. Consider the following:

{R/(A,B)}/(C,D) = {R ∩ A', R ∩ B'}/(C,D) = {R ∩ A' ∩ C', R ∩ A' ∩ D', R ∩ B' ∩ C', R ∩ B' ∩ D'}

which we may denote as:

= R/(AC, AD, BC, BD)

We see that the concatenation of operations results in the cross product of the element terms in our corrupted notation. Since the cross product results regardless of order, we may say that the right-hand concatenated sets commute and write:

{R/(A,B)}/(C,D) = {R/(C,D)}/(A,B)

and note that it is unambiguous to write R/(A,B)/(C,D) (omitting the curly brackets). We dignify the above operation in its corrupted notation:

Theorem 8.1. R/(A,B)/(C,D) = R/(AC, AD, BC, BD)

8.3.1. Conflict Sets and Exclusion Sets

In this text, the set division operator will be used for conflict resolution. Conflicting sets will be enclosed in square brackets and will be called the conflict set. If there are only two sets in conflict, e.g., [a,b] with a ∪ b = R, then the conflict sets are the same as the exclusion sets, e.g., (a,b). However, if there are more than two conflict sets, e.g., [a,b,c], then the exclusion set will contain one exclusion subset for each conflict set, consisting of the union of the other conflict sets, e.g., (bc, ac, ab). In many instances (where there are only two sets in conflict), the conflict set and the exclusion set will be the same. However, if there are more than two conflict sets, another step is required to obtain the exclusion set from the conflict set.

8.3.2. Conflict Resolution

We will be dealing with problems where a set contains elements that conflict in the value of some attribute. We will not only break that set into subsets that contain all elements that do not conflict but, further, we will only be interested in the largest sets that cover the original set. Parentheses are retained to denote the exclusion set, and square brackets will be used to hold the conflicting sets.

Let A be a set of elements with one value of a binary-valued attribute. Let B be a set of elements with the other value. Let Z be a set of elements with a "don't care" value for that attribute. Let R be a subset of A ∪ B ∪ Z. Then we define the conflict resolution operation as R/[A,B] = R/(A,B) = {R ∩ A', R ∩ B'}, as before.

Let C be a subset of R with one value of another attribute. Let D be a subset of R with the other value of this other attribute.


Then R/[A,B]/[C,D] = R/(AC, AD, BC, BD), again as before, where we have eliminated the step converting the original conflict sets to exclusion sets, since there are only two subsets in each conflict set.

The set division operator will ensure that conflicting elements never appear in the same set within the resultant set of sets since, once the sets in R have been divided, there is no mechanism for reassembly. However, since we are interested only in the largest sets, it is possible that some of the resultant sets can be dropped. Note that A and B will be mutually exclusive, and so will C and D. However, it may very well be that C or D will be subsets of A or B. Consider the case where C is a proper subset of A. Then, in our corrupted notation, AC is actually A ∪ C and, since C is a subset of A, AC = A. Therefore, if C ⊂ A,

R/[A,B]/[C,D] = R/(A, AD, BC, BD)

However, R ∩ A' ∩ D' is a subset of R ∩ A' and can be dropped from the resultant set of sets. Therefore,

Theorem 8.2. If C ⊂ A, then R/[A,B]/[C,D] = R/(A, BC, BD)

The result of this theorem is quite simple in our corrupted notation, since it means that all subsuming terms can be dropped from the exclusion set. The case for C = A is even simpler, yielding

Theorem 8.3. R/[A,B]/[A,D] = R/(A, BD).

Theorem 8.4. If C ⊆ A and D ⊆ B, then R/[A,B]/[C,D] = R/(A,B).

8.3.3. Conflict Resolution Algorithm

There also exists a paper-and-pencil procedure for the development of the conflict-free sets. It is referred to here as the conflict resolution algorithm, and is presented through an example. The corresponding action with the conflict resolution operator is carried as a note on the right.

Original set: {a,b,c,d,e,f,g}   (operator: R)

Assume b and d conflict with e and g in some attribute, where the others are don't cares. We now underline b and d and cross off e and g (underlining and crossing off are simply ways of showing which sets conflict with which sets):   (operator: R / [bd,eg])

We break the set into two sets, one with all the elements except those that are crossed off, and the other with all the elements except those that are underlined:

{{a,b,c,d,f}, {a,c,e,f,g}}   (operator: {R ∩ e' ∩ g', R ∩ b' ∩ d'})

If there is now a conflict between {c,d} and f, we continue in exactly the same way:

{{a,b,c,d,f}, {a,c,e,f,g}} / (cd,f)   (operator: R/(bd,eg)/[cd,f] = {R ∩ e' ∩ g', R ∩ b' ∩ d'}/(cd,f))

giving

{{a,b,c,d}, {a,b,f}, {a,c,e,g}, {a,e,f,g}}   (operator: R/(bcd, bdf, cdeg, efg) = {R ∩ e' ∩ f' ∩ g', R ∩ c' ∩ d' ∩ e' ∩ g', R ∩ b' ∩ d' ∩ f', R ∩ b' ∩ c' ∩ d'})

Since we will be working with elements described by single letters, the above process, regardless of the method used, may be denoted in operational terms as:

{abcdefg} / [bd,eg] / [cd,f]
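The worked example above can be checked mechanically. The sketch below (function name mine) applies one conflict set at a time: each conflict group yields an exclusion set (the union of the other groups), every current set is split, and only maximal (non-subsumed) results are kept.

```python
def conflict_resolution(sets, conflict_groups):
    """Split every set so no two conflicting groups remain together;
    keep only the maximal (non-subsumed) resulting sets."""
    # Exclusion set for each group: the union of all the other groups.
    exclusions = [frozenset().union(*(g for g in conflict_groups if g is not grp))
                  for grp in conflict_groups]
    split = {frozenset(S - E) for S in sets for E in exclusions}
    return {S for S in split if not any(S < T for T in split)}

sets = {frozenset("abcdefg")}
for conflict in ([set("bd"), set("eg")], [set("cd"), set("f")]):
    sets = conflict_resolution(sets, conflict)
print(sorted(''.join(sorted(S)) for S in sets))
# → ['abcd', 'abf', 'aceg', 'aefg']
```

The result reproduces {{a,b,c,d}, {a,b,f}, {a,c,e,g}, {a,e,f,g}} from the example; because the exclusion sets are built from the other conflict groups, the same function also handles conflict sets with more than two groups.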

The above examples have been concerned with sets whose elements have binary-valued attributes. The operation can be extended for use with elements with any number of values. Assume that there is an attribute with three values, and that R contains elements with all three values plus elements which have don't cares with respect to that attribute. Assume that a and b, which do not conflict with each other, conflict with c, e and g; that c also conflicts with e and g; and that e and g do not conflict with each other. Then, with R = {a,b,c,d,e,f,g}, we may write

R / [ab, c, eg]

This converts to the standard parentheses notation, except that the union of the other two conflict sets must be excluded from each resulting set, giving

R / (ceg, abeg, abc) = {{a,b,d,f}, {c,d,f}, {d,e,f,g}}

If we have multiple conflict sets, then once the operation is in terms of exclusion sets, the standard cross-product terms can be generated, and the operation proceeds as with systems having only two sets in each conflict set.

Problem 8.1. Evaluate the following operations using both the algebraic method (conflict resolution operator) and the conflict resolution algorithm.
a. {abcdefg} / [abc, def]
b. {abcdefg} / [a,cd] / [b,de] / [e,g]
c. {abcdefg} / [a,bc] / [a,cd] / [b,ef] / [c,ef]
d. {abcdefg} / [ab,cd] / [abc,df] / [cd,eg]
e. {abcdefg} / [abe,df] / [ab,d] / [be,df] / [c,f]
f. {abcdefg} / [a,b,de] / [ab,cd,g] / [ac,bd,ef]

8.4. Huffman-Mealy State Minimization

The Huffman-Mealy state minimization procedure is an iterative trial-and-error scheme that focuses on the output section of the transition table. If the output is a function only of the internal system states, or if there are not very many don't cares in the output section, the Huffman-Mealy method is quite efficient in determining the minimal state transition table.
If there are a large number of internal states or a large number of don't cares in the output section, the procedure for the Huffman-Mealy method is cumbersome, and the compatible pairs method is preferred. The implementation presented here is not the same as that presented by others, but it has several advantages when don't cares occur in the output section. It provides a clearer picture of what is taking place in the process, and also provides a procedure that can be used to advantage in the last stage of the compatible pairs method.

If we examine the output section of a transition table and view each row as an output vector, called an output class, we see that a lower bound on the number of states is the number of different output classes, since these represent the number of different output situations the circuit must provide. The Huffman-Mealy procedure begins by constructing superstates based on the output classes required by the transition table. Each superstate is formed by collecting all primitive states whose output class would allow them to be placed in the set. A closure test is performed to see if an equivalent transition table can be constructed with this (lower bound) number of states. If it can, the procedure is over. If not, then a second trial is established


consisting of more states, where the conflicting transitions are used to define the new states. The process continues until an equivalent transition table can be constructed. The procedure can be outlined in four steps. The first step is a trivial reduction to eliminate some of the effort in the trial and error search.

Step 1. If two rows of the transition table are exactly the same all the way across (both next state and output sections), delete one of them and replace all references to the deleted row with a reference to the row that was kept.

Step 2. Examine the output section and group the states into sets where all members of a given set may have the same output vector. A state with an output that has don't cares must be grouped with each set of which it can possibly be a member. That is, a state with don't cares may appear in more than one set. The output vector that determines the set will be referred to as the output class. Each set thus constructed is now thought of as a superstate and given a state name.

Note: If the output section of the transition table is fully specified, this step is most easily accomplished by grouping the states by output class. However, if there are don't cares in the output section, this step is best performed by applying the conflict resolution operator, where the conflicts are determined by examining each column in the output section of the transition table.

Step 3. The closure test. Test for the equivalence property by constructing the equivalent of a new transition table, where each primitive state in each set (or superstate) is tested to see if its transition requirements are compatible with all the other states in that set. (If a state has multiple coverage, it must be compatible in only one set. If it is compatible in one set, it can be removed from any set in which it is not compatible.)

Step 4. If equivalence cannot be realized, then those sets in which an incompatibility existed are broken into sets that contain all possible compatible states, and Step 3 is repeated. The best method for forming the new sets of states is once again the application of the conflict resolution operation. If equivalence can be realized, the procedure is completed by checking to see if all the superstates are needed. This is accomplished with a coverage table (like the Table of Primes), observing whether any nonessential states can be removed without destroying the closure requirement. Once the final superstates are determined, the process is completed by setting up the final transition table with a row for each required superstate. Make the next state entries those rows for which all primitive states in the set are compatible, and make the output section entries those values required by the primitive states.

Final Note: In Steps 3 and 4, if there are a lot of states with multiple coverage, the process becomes cumbersome, as we must check whether any possible combination that covers each state once is compatible. This is not impossible, but simply requires a straightforward algorithm that ensures that all cases are tested.
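Step 2 is where the conflict resolution operator does real work when there are don't cares. The sketch below applies it column by column to a small made-up output section (the states and output values here are illustrative only, not from the text's examples); '-' marks a don't care.

```python
# Made-up output section: one string of column values per state.
outputs = {'a': '00', 'b': '0-', 'c': '10', 'd': '-1'}

oc_sets = [frozenset(outputs)]            # start with all states together
for col in range(2):
    # Group states by their specified value in this column.
    groups = {}
    for s, row in outputs.items():
        if row[col] != '-':
            groups.setdefault(row[col], set()).add(s)
    if len(groups) > 1:                   # a conflict in this column
        # Exclusion set for each value: union of the other value-groups.
        exclusions = [set().union(*(g for v, g in groups.items() if v != val))
                      for val in groups]
        split = {frozenset(S - E) for S in oc_sets for E in exclusions}
        # Keep only maximal sets (drop subsumed ones).
        oc_sets = [S for S in split if not any(S < T for T in split)]
print(sorted(''.join(sorted(S)) for S in oc_sets))   # → ['ab', 'bd', 'c']
```

State b (output 0-) ends up in two oc-sets, {a,b} and {b,d}, exactly the multiple coverage that Step 2 allows; the singleton {d} is dropped because it is subsumed by {b,d}.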


8.4.1. Huffman-Mealy Method and Sample Problem

Consider the transition table from Figure 7.12 for Sample Problem No. 1 in Chapter 7.

                    Transition Table
          Next State                 Output
Current   Input State (x1,x2)        Input State (x1,x2)
State     00   01   10   11          00   01   10   11
a         a    b    c    -           0    0    0    -
b         a    b    -    d           0    0    -    0
c         a    -    c    e           0    -    0    -
d         -    b    c    d           -    0    0    0
e         -    f    c    e           -    1    -    1
f         a    f    -    e           -    1    -    1

Figure 8.1. Transition Table for Sample Problem No. 1 in Chapter 7

Following the principles, if not the exact procedure, set forth by Huffman and Mealy, we group all states that can possibly be grouped into output classes. Using the conflict resolution algorithm, the set to be split is the set of all primitive states {abcdef}. Focusing on the output section, we see there is no conflict in the column for input state 00. For input state 01, we obtain conflicting states (abd,ef). There is no conflict for input state 10. And for input state 11, we have conflicting states (bd,ef). This gives

{abcdef} / [abd,ef] / [bd,ef] = R / (abd, abdef, bdef, ef) = R / (abd, ef) = {{cef}, {abcd}}

We will name these states α = {abcd} and γ = {cef}. Notice that primitive state c appears in both oc-sets α and γ. We now perform a Huffman-Mealy closure test to see if it is possible to construct an equivalent transition table with states (α, γ). The best way to approach this is to consider the possible options with regard to state transitions. For example, observe that any primitive table state that goes to state c can go to either superstate α or γ. It is best at this time to make a table of where the circuit may go for each primitive state. That is, construct a table that shows all the superstates in which each primitive state may be found.

State   In Superstate
a       α
b       α
c       α, γ
d       α
e       γ
f       γ

We examine mini transition tables for each superstate with respect to the primitive states that have been grouped together. The entries for the next states in the mini transition table are the superstate options for the relative next state entries in the primitive transition table.

Output Class = (0,0,0,0)
State α = {abcd}
     00    01    10     11
a    α     α     α,γ    -
b    α     α     -      α
c    α     -     α,γ    γ
d    -     α     α,γ    α

There is no intersection of states in column 11, and so {a,b,c,d} will not combine. We use the conflict resolution operation to obtain the sets to be used for the second trial:

{abcd} / (c,bd) = {{abd}, {ac}}

We continue, performing a closure test on state γ.

Output Class = (0,1,0,1)
State γ = {cef}
     00    01    10     11
c    α     -     α,γ    γ
e    -     γ     α,γ    γ
f    α     γ     -      γ

There is no conflict, and the set will be left intact. A test is now performed on the sets that result from the use of the conflict resolution operation. We rename the states α = {abd}, β = {ac} and γ = {cef}.

        00     01    10     11
α:  a   α,β    α     β,γ    -
    b   α,β    α     -      α
    d   -      α     β,γ    α
β:  a   α,β    α     β,γ    -
    c   α,β    -     β,γ    γ
γ:  c   α,β    -     β,γ    γ
    e   -      γ     β,γ    γ
    f   α,β    γ     -      γ

There are no conflicts, and we now know there is a three-state transition table that is equivalent to the six-state primitive transition table. The question still remains, however: are all three states required? We return now to the concept of coverage. All of the primitive states must be covered. The Table of Primes is a very useful tool for ensuring coverage. A table of superstates is now constructed in exactly the same way.
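The closure test itself is mechanical. The sketch below assumes the next-state section of the sample problem as read above ('-' for unspecified entries) and checks, for each trial partition, that every column of every superstate has some superstate containing all of its specified next states:

```python
# Next-state section of the sample problem (columns 00, 01, 10, 11).
next_state = {
    'a': ['a', 'b', 'c', '-'],
    'b': ['a', 'b', '-', 'd'],
    'c': ['a', '-', 'c', 'e'],
    'd': ['-', 'b', 'c', 'd'],
    'e': ['-', 'f', 'c', 'e'],
    'f': ['a', 'f', '-', 'e'],
}

def closed(superstates):
    """Closure test: every column of every superstate must have at least
    one superstate containing all of its specified next-state entries."""
    for ss in superstates:
        for col in range(4):
            targets = {next_state[s][col] for s in ss} - {'-'}
            if targets and not any(targets <= t for t in superstates):
                return False
    return True

print(closed([set('abcd'), set('cef')]))            # False: column 11 of {a,b,c,d}
print(closed([set('abd'), set('ac'), set('cef')]))  # True: second trial closes
```

The first trial fails exactly where the mini table showed no intersection (column 11 of {a,b,c,d}); the split into {abd}, {ac}, {cef} closes.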


      a    b    c    d    e    f
*α    X    X         X
 β    X         X
*γ              X         X    X

Figure 8.2. State Coverage Table for Sample Problem No. 1

States α and γ are required for coverage, but β is not. An additional closure test must be run to see if β can indeed be removed.

        00    01    10    11
α:  a   α     α     γ     -
    b   α     α     -     α
    d   -     α     γ     α
γ:  c   α     -     γ     γ
    e   -     γ     γ     γ
    f   α     γ     -     γ

We now see that α and γ do indeed have closure, and the final transition table can be constructed. The next state entries are the intersection of the next state entries of the mini table over all primitive states in that superstate. The output section entries are those values required by the collection of primitive states as represented in the original transition table. The final table is shown in Figure 8.3 and the associated state graph is shown in Figure 8.4.

          Next State                 Output
Current   Input State (x1,x2)        Input State (x1,x2)
State     00   01   10   11          00   01   10   11
α         α    α    γ    α           0    0    0    0
γ         α    γ    γ    γ           0    1    0    1

Figure 8.3. Final Transition Table for Sample Problem No. 1

[State graph: α has self-loops on inputs 00, 01 and 11, each with output 0; α goes to γ on input 10 with output 0; γ returns to α on input 00 with output 0; γ has self-loops on inputs 01, 10 and 11 with outputs 1, 0 and 1, respectively.]

Figure 8.4. Final State Graph for Sample Problem No. 1


A minimal state transition table has been found, but it is still necessary to check its operation with respect to the original word problem in order to make sure that an error has not occurred at some point in the design process. We see that the system stays in α until the input is 10, at which time it moves to γ with an output of 0. State α can be viewed as a "waiting" state, waiting for the input to be 10. In state γ, if x2 goes high (with x1 already being high), it will stay in γ, but the output will become a 1. (Note that this is a result of allowing the output to be a function of the total state.) If x1 goes low, the output will stay at 1. If x1 then goes high, the output remains at 1. If x2 goes low with x1 being high, the output will go to 0, but the system will stay in γ, so that if x2 then goes high again, the output will be 1. If x2 goes low with x1 being low, then the system moves to state α, to wait again for the input to be 10.

Problem 8.2. Figure 8.5 contains some stripped-down transition tables. Perform Huffman-Mealy reductions on these tables to find minimal cover transition tables. Note that the output is not a function of the input state.

Problem 8.3. Expand the following transition tables to permit the output to be a function of the total state. Perform Huffman-Mealy reductions on these tables to find minimal cover transition tables.

Transition Table
Next State
Input State 00 01 10 11
a b c b d e c a f d b c e b c f b c

Transition Table
Next State
Input State 00 01 10
a b c b a c a b e e f f b e

Current State a b c d e f

Output z 0 0 0 1 0 1

Current State a b c d e f

11 d d d d -

Output z 0 0 0 1 1 1

Current State a b c d e f g h

Transition Table Next State Input State 00 01 10 a b c b a c e b c e g h g h g e h a -

11 d f d f f d

Output z 0 0 0 0 0 0 0 1

Current State a b c d e f g h

Transition Table Next State Input State 00 01 10 11 a b c b a d c e f d g h e b c f g h g e d h a f

Output z 0 0 1 1 1 0 1 0

Figure 8.5. Transition Tables for Problems 8.2 and 8.3


8.5. The Compatible Pairs Method

The compatible pairs method involves two algorithms associated with a table that brings into evidence the transitional requirements of compatibility on a pair-wise basis. There are three steps to the method for arriving at the maximal compatible sets, or superstates, that are guaranteed to have both coverage and closure:
1. Establish incompatibility between pairs of states based on the output vectors.
2. Establish pair compatibility constraints to meet transition or closure requirements.
3. Partition superstates to meet compatibility constraints.

The algorithms applied in each step are quite simple and straightforward and do not require a great deal of experience to be reasonably error free. The algorithm for partitioning is the conflict resolution algorithm. The compatible pairs process is organized around the compatible pairs table, a simple matrix (below the major diagonal only) that provides a convenient format for notes regarding pairs of states. The algorithm will be presented and discussed with respect to Sample Problem No. 1 of Chapter 7.

                    Transition Table
          Next State                 Output
Current   Input State (x1,x2)        Input State (x1,x2)
State     00   01   10   11          00   01   10   11
a         a    b    c    -           0    0    0    -
b         a    b    -    d           0    0    -    0
c         a    -    c    e           0    -    0    -
d         -    b    c    d           -    0    0    0
e         -    f    c    e           -    1    -    1
f         a    f    -    e           -    1    -    1

Figure 8.6. Transition Table for Sample Problem No. 1 in Chapter 7


8.5.1. Establishing Conflicting Output States The first step is to examine the output section of the transition table for the conflicts that will make states incompatible. The results are shown in Figure 8.7. Comparisons of states {a,b} show no conflicts in the output vector; neither do states {a,c} or states {a,d}. States {a,e} conflict in column 01 and cannot be placed together in the same superstate. When a conflict occurs between two states, an X is placed in the right half of the square representing the intersection of the two states in the compatible pairs matrix. There is also a conflict for state {a,f}. State b is then compared with all states below it (it has already been compared with a). Conflicts are found for states {b,e} and {b,f}. State c is then compared with each state below it. No conflicts are found. State d is compared with e and f, producing a conflict in the pairs {d,e} and {d,f}. Finally e is compared with f (no conflict). In practice, it is more efficient to scan each column for conflicting pairs and mark the table accordingly.
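The column-scanning shortcut mentioned above is easy to automate. The sketch below uses the output section of the sample problem as interpreted here (one string of column values per state, '-' for don't cares) and flags every pair that disagrees in some fully specified column:

```python
from itertools import combinations

# Output section of the sample problem (columns 00, 01, 10, 11).
output = {'a': '000-', 'b': '00-0', 'c': '0-0-',
          'd': '-000', 'e': '-1-1', 'f': '-1-1'}

# A pair conflicts if some column is specified for both states and differs.
incompatible = {(s, t) for s, t in combinations(sorted(output), 2)
                if any('-' not in (u, v) and u != v
                       for u, v in zip(output[s], output[t]))}
print(sorted(incompatible))
# the six X'ed pairs: (a,e), (a,f), (b,e), (b,f), (d,e), (d,f)
```

This reproduces exactly the six X entries found by hand in Figure 8.7.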


Output Conflicts:
    a    b    c    d    e
b
c
d
e   X    X         X
f   X    X         X

Closure Constraints:
    a    b    c    d    e
b   ✓
c   ✓    d,e
d   ✓    ✓    d,e
e   X    X    ✓    X
f   X    X    ✓    X    ✓

Figure 8.7. Output Conflicts and Closure Constraints

8.5.2. Determining Pair-wise Closure Constraints

The second step is to find pair-wise closure constraints. This is done by repeating the first step, but this time examining the next-state portion of the transition table. If, for any given column, the next states are the same, or could be the same by proper assignment to don't care cells, no entry is made in the compatible pairs table. If, in any column, the entries are different, they are entered in the intersection cell of the compatible pairs matrix. (There is a substantial advantage to keeping them in alphabetical order for the next step.) If the cells are incompatible from step one (they have an X in the cell), no entries need to be made (nor is it necessary to consider these cells). Also, if the entries are the same states as those being compared, no entry needs to be made. If, after checking all columns, no entry has been made in the cell, a check mark is entered to show complete compatibility.

An entry means that, if the pair of states being compared are to be placed in the same superstate, then the states represented by the entries made in the matrix must also be in a superstate together. This is reasonable, since the transition under that input condition cannot go to two different states at the same time. For the case where the entries in the cell would be the same as the states being compared, the statement would be "if these two states are in the same superstate, then they must be in the same superstate" - obviously not adding anything. Actually, it doesn't hurt anything either, and no error of consequence occurs if you don't catch it.

This process is shown in Figure 8.7. In comparing rows (states), it is noted that {a,b}, {a,c} and {a,d} are completely compatible. For {a,e}, b and f would have to be in the same superstate. However, in this case, the cell has an X in it and the entries do not need to be made. The process proceeds in exactly the same way for all pairs of states.

8.5.3. Determination of Incompatibility Based on Closure Constraints

Until the last step, which is essentially a Huffman-Mealy closure test, there is no further need for the transition table, and attention focuses completely on the table of compatible pairs. The next step is to establish the effect of the original incompatibilities on the sequential or transition action. The conventional procedure is to examine the effect of each incompatible pair on the pairs with closure constraints. The algorithm is as follows:
1. Pick a cell with a single X and search the table to see if this pair appears as a constraint for the compatibility of any other pairs. Wherever it appears, place a single X in the right half of that cell.


Chapter 8: Minimization of Transition Tables

2. When the table search is complete, place a second X in the cell that was picked. 3. If there is still a cell with a single X, repeat Steps 1 and 2. This algorithm is thorough and in many cases unnecessarily so. Frequently, we can focus on the cell entries and complete the process faster. However, care must be taken to ensure that all conflicts are covered. The results of this step for Sample Problem No. 1 are shown to the left in Figure 8.8.
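Steps 1 through 3 above amount to propagating incompatibility to a fixed point: a pair becomes incompatible as soon as any of its closure constraints is incompatible. A minimal Python sketch of that propagation (the pair table here is a small hypothetical example, not Sample Problem No. 1):

```python
def propagate_incompatibility(constraints, incompatible):
    """Steps 1-3 as a fixed point: any pair whose closure constraints
    contain an incompatible pair becomes incompatible itself (the
    "second X"); repeat until no new cell is marked."""
    incompatible = set(incompatible)
    changed = True
    while changed:
        changed = False
        for pair, required in constraints.items():
            if pair not in incompatible and required & incompatible:
                incompatible.add(pair)
                changed = True
    return incompatible

P = frozenset  # a pair of states is an unordered set
# Hypothetical compatible-pairs table: pair -> pairs required for closure.
constraints = {
    P('ae'): {P('bf')},   # {a,e} compatible only if {b,f} is
    P('bc'): {P('de')},
    P('de'): set(),
}
incompat = propagate_incompatibility(constraints, {P('bf')})
# {a,e} is swept in; {b,c} and {d,e} survive
```

The fixed-point loop does the same bookkeeping as the hand method, just without choosing which single-X cell to process first.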

Figure 8.8. Developing M-Sets from Completed Closure Constraints


8.5.4. Determination of Maximal Compatible Sets The m-sets can now be found using the conflict resolution operator, as follows (observing conflicts by columns, left to right). {abcdef}/(a,ef)/(b,cef)/(c,d)/(d,ef) = {abcdef} / [a,ef] / [b,ef] / [d,ef] / [b,c] / [c,d] = {abcdef}/[abd,ef]/[c,bd] = {abcdef}/(abcd,abd,cef,bdef) = {abcdef}/(abd,cef,bdef) = {{cef}, {abd}, {ac}} where the previously defined properties of the operator have been used to simplify the subscript determination. The conflict resolution algorithm can also be used. We start with all primitive states in a single set and then continue by splitting the set by observing the conflicts one column (state) at a time, starting on the left. (Actually, where we start is immaterial since the conflict resolution operator is commutative with respect to conflicting sets.) This particular situation allows us to restate the conflict resolution algorithm slightly. That restatement is presented here, partially to reinforce the algorithm and partially because it is the method generally presented by other authors. The algorithm begins by assuming that all states might possibly go together into one superstate and so are placed in parentheses as a single state (see Developing M-Sets in Figure 8.8). Now each column is tested in turn in accordance with the following algorithm: 1. Underline within the parentheses the column being tested (to help in bookkeeping; start at the left). 2. Each cell with two X's represents the intersection of incompatible states. Cross off within parentheses all states that are incompatible with the state representing the column under test. 3. If any states have been crossed off inside parentheses, then the states within the parentheses are split into two states as follows: a. One state containing all states that have not been crossed off.


b. One state containing all of the states except the state underlined. c. If any collection formed is a subset of any other set in parentheses, it is deleted. 4. Repeat Steps 1, 2, and 3 (moving one column to the right each time) until all columns have been processed. The resultant sets are the maximal compatible sets (m-sets). They are guaranteed to exhibit coverage and closure. The m-sets must be tested to see if all are required. This step is identical to the one following the Huffman-Mealy procedure. As a matter of fact, we must still effectively carry out the Huffman-Mealy closure test to obtain the transition table. The tested columns will have a common intersection; if they do not, a mistake has been made at some point. The transition table is determined from the intersection of the tested columns. Problem 8.4. Perform a compatible sets analysis on the expanded tables of Problem 8.3. Problem 8.5. Using the method of your choice, find the minimum state graphs for the stripped-down pulse mode transition tables that follow.
0 a b c d a c a c 1 b d b d a. 0 b b e g f g d h 1 c d f c a d a b c. n 0 1 0 0 1 0 1 1 0 a a b c d e f g h i j k l 0 0 0 1 0 1 0 1 0 0 a b c d 01 a c d a 10 01 10 b 0 0 0 b 0 0 b 0 b 0 1 b. 1 b c d e f g h i j k l m m d. n 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100

a b c d e f g h

a b c d e f g h i j k l m
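The splitting algorithm of Section 8.5.4 can be sketched in Python. The incompatible pairs below are those of Sample Problem No. 1, read from the conflict list in the operator computation (a with e and f; b with c, e and f; c with d; d with e and f); the result reproduces the m-sets found in the text:

```python
def maximal_compatibles(states, incompatible_pairs):
    """Start with one set of all states; for each state (column),
    split every set that mixes it with an incompatible partner into
    (set minus the conflicts) and (set minus the state); drop any
    set that is a subset of another."""
    incompat = {frozenset(p) for p in incompatible_pairs}
    sets = [frozenset(states)]
    for s in states:
        new_sets = []
        for c in sets:
            bad = {t for t in c if frozenset((s, t)) in incompat}
            if s in c and bad:
                new_sets.append(c - bad)    # keep s, drop its conflicts
                new_sets.append(c - {s})    # or drop s instead
            else:
                new_sets.append(c)
        sets = [c for c in new_sets                 # delete subsets
                if not any(c < d for d in new_sets)]
        sets = list(dict.fromkeys(sets))            # delete duplicates
    return sets

msets = maximal_compatibles('abcdef',
    ['ae', 'af', 'bc', 'be', 'bf', 'cd', 'de', 'df'])
# -> the three m-sets {a,b,d}, {a,c}, {c,e,f}, matching the text
```

The intermediate splits match the hand trace exactly: {abcdef} splits to {abcd}/{bcdef} at column a, and the subsets {bd}, {ad}, {d} and {ef} are discarded along the way.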

8.6. The Grasselli-Luccio Method The Grasselli-Luccio procedure for finding the minimum row flow graph consists of two steps. The first is to find the prime compatible sets.


The second is to select a set of primes that will produce both coverage and closure with a minimum number of compatible sets. Normally, the process starts with the m-sets derived using either the Huffman-Mealy method or the compatible pairs method. The m-sets will always provide a system that satisfies both coverage and closure. However, if there are don't cares of the non-input-restricted variety in the next state portion of the transition table, the m-sets may have multiple coverage that would permit further reduction in the number of states required. We may use the coverage table to determine if all m-sets are necessary. If they are not, the table will also indicate which m-sets are candidates for removal, and the Huffman-Mealy closure test can be used to see if they can be removed. The problem occurs when no m-set can be removed as developed, yet some further reduction may be possible if some of the multiply-covered states are removed from some of the m-sets. Without the Grasselli-Luccio method, the only approach that would guarantee a minimum state system would be an exhaustive examination of every possible combination over complete covers. The Grasselli-Luccio approach provides a method that considers all the combinations but reduces the amount of work by discounting, early in the work, those combinations which have nothing to offer. To begin with, each m-set represents a collection of primitive states that are compatible both in their output and in their transition to other states. Any subset of an m-set is also compatible; any such subset is called a compatible set, abbreviated c-set. Using this new term, the problem is to find a set of c-sets that will provide coverage and closure with the fewest c-sets. The first step of the Grasselli-Luccio procedure is to determine those c-sets, called prime c-sets, that are reasonable candidates for the second step.
The second step results in the selection of those c-sets which will yield a minimum state system. The tabular-tree approach used here is not the same bookkeeping scheme presented by Grasselli and Luccio, but it serves the purpose as well and fits in better, conceptually, with material already presented. 8.7. The Search for Prime C-Sets Consider now the pulse mode transition table in Figure 8.9, with its accompanying compatible pairs and m-sets. There are four m-sets, and we are assured of being able to develop an equivalent transition table with four rows. We proceed with a Grasselli-Luccio search for prime c-sets. The student will need to follow the process and the reasoning in Figure 8.10. This table could be set up to contain all possible compatible sets for analysis but, since some of them do not need to be analyzed, they are not included. It is advantageous to place c-sets in the table in order of decreasing size (number of primitive states). Thus, {bcdf} and {cdef} are entered first. Since {abc} and {ace} each contain three states, they are entered next. Generally, the m-sets are added to the table as the first entries when their size group starts. The of-set column will contain the numbers of all the prime c-sets of which this c-set is a proper subset. Since the m-sets are not proper subsets of any c-set, there can be no entry for them. The next columns represent the next state vector for each input state. (This next state vector is ordered for ease in setting up the vectors for subsets.) The entries are taken directly from the primitive transition table for the states that are members of the m-set. By


keeping them ordered with respect to the primitive states in the c-set column, subset next state vectors can be derived from these without going back to the transition table.
Transition Table Current State a b c d e f Next State Input (x) 0 b d d f f 1 c a e c c 0 0 1 1 z Input (x) 1 1 0 -

The m-sets are {abc}, {ace}, {bcdf} and {cdef}.
Figure 8.9. Sample Problem for Grasselli-Luccio Analysis

The implied class set column contains the sets of primitive states (in alphabetical order, to ease the scanning process to be discussed shortly) that must be in a c-set if this c-set is to be used. They are taken from the next state vectors. There are two exceptions: 1. If the primitive states in a next state vector are all present in the c-set representing the row being analyzed, they do not represent an additional constraint and are not considered as a set in the implied class set. 2. A single state is not considered an implied class set. (The c-set selection process will ensure coverage.) With c-set {bcdf}, under x = 0, all primitive states which do not have don't cares go to states d or f. Since d and f are both in the same c-set, then by 1 above, {bcdf} does not represent an implied class set. Under x = 1, b goes to a, c to e, d to c and f to c. The next state vector (aecc) produces the implied class set {ace}. That is to say, there must exist a c-set containing primitive states a, c and e if the c-set {bcdf} is to be used, and the superstate {bcdf} will transfer to that state under input condition x = 1. There is a great similarity between the next state vectors in the Grasselli-Luccio analysis and the closure tables used in the Huffman-Mealy reduction procedure. All m-sets are prime, and a number is issued to each prime c-set.
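The implied class set rule, with its two exceptions, can be sketched in Python. The next-state table below is transcribed from the sample problem as best it can be read (`None` marks a don't care), and the sketch reproduces the result just derived for {bcdf}:

```python
def implied_class_sets(cset, table):
    """For each input column, gather the next states of the c-set's
    members; keep the set only if it contains more than one state
    (exception 2) and is not contained in the c-set itself
    (exception 1)."""
    members = set(cset)
    n_cols = len(next(iter(table.values())))
    implied = []
    for col in range(n_cols):
        nxt = {table[s][col] for s in cset} - {None}
        if len(nxt) > 1 and not nxt <= members:
            implied.append(frozenset(nxt))
    return implied

# Next-state portion of the sample table, columns x = 0 and x = 1.
table = {
    'a': ('b', 'c'),
    'b': (None, 'a'),
    'c': ('d', 'e'),
    'd': ('d', 'c'),
    'e': ('f', None),
    'f': ('f', 'c'),
}
implied_class_sets('bcdf', table)   # one implied class set: {a,c,e}
implied_class_sets('cdef', table)   # the null set: no constraints
```

Note that {bcdf} picks up {ace} only from the x = 1 column; the x = 0 next states {d,f} fall inside {bcdf} itself and are dropped by exception 1.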


If we examine the next state vectors for {cdef}, we see that the first next state vector (ddff) requires {df}, which is a subset of {cdef}. This means that the state graph would loop back to the same state under input condition x = 0. Similarly, for x = 1, {ec} is a subset of {cdef}. Thus, the implied class set is the null set ∅. This process continues for {abc} and {ace}, producing implied class sets as shown. All c-sets up to this point have been m-sets and thus are prime. If the implied class sets are not m-sets or subsets of m-sets, then a mistake has been made, since the m-sets by themselves can always be used to form an equivalent state graph or transition table. We must break the m-sets into subsets and examine them for primeness. We will see that, whereas the algorithm for breaking the m-sets into subsets must be capable of producing all possible subsets (unordered), we will not actually have to analyze each one, since some will obviously not be prime. We now form all subsets of size three from {bcdf} to test for primeness. What we have is not a rule that directly establishes primeness but rather a rule that directly establishes non-primeness. The rule is this: if eliminating a primitive state from a prime c-set does not improve, or at least change, the closure requirement in some way which has a promise of being beneficial, then the subset is not a prime c-set. As we move downward in the table, we see that c-sets may well be subsets of more than one prime c-set, and the test must be performed against each prime c-set of which they are subsets. This rule can be stated more precisely: if a c-set B is a subset of prime c-set A, and the implied class set (of c-sets) of c-set A is a subset of the implied class set (of c-sets) of c-set B, then c-set B is not prime. For example, consider c-set {bcd} in Figure 8.10. It is a subset of prime c-set 1, {bcdf}.
The implied class set of {bcdf} consists of one c-set, and this c-set is a subset (it doesn't have to be a proper subset) of the set of c-sets that form the implied class set for {bcd}. Hence, {bcd} is not a prime c-set. In this case, the implied class sets are identical. This means that removing the primitive state f from {bcdf} has not resulted in any improvement in implied class set constraints. To remove it would be counter-productive, since any other c-set that requires a c-set with both f and b together would not be able to use the resultant c-set for closure. It is advantageous, for checking results at a later time, to circle the number of the prime c-set responsible for non-primeness.
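The non-primeness rule is a subset test on implied class sets. A sketch, using the three c-sets just discussed ({bcdf} prime, with {bcd} and {bdf} among its subsets):

```python
def is_prime(cset, implied, primes):
    """c-set B is NOT prime if it is a subset of a prime c-set A whose
    implied class sets are all contained among B's: removing states
    bought no relaxation of the closure constraints."""
    b_states, b_implied = set(cset), set(implied[cset])
    for a in primes:
        if b_states < set(a) and set(implied[a]) <= b_implied:
            return False
    return True

implied = {
    'bcdf': {frozenset('ace')},
    'bcd':  {frozenset('ace')},   # identical constraint: nothing gained
    'bdf':  {frozenset('ac')},    # weaker constraint: a real improvement
}
primes = ['bcdf']
is_prime('bcd', implied, primes)   # False
is_prime('bdf', implied, primes)   # True
```

The rule about null implied class sets falls out automatically: the empty collection is a subset of every collection, so no subset of a prime with implied class set ∅ (such as {cdef}) can ever pass the test.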


Prime c-set                       Next State Vector
Number   c-set   of-set           x=0       x=1       Implied Class Sets
1        bcdf                     (-ddf)    (aecc)    {ace}
2        cdef                     (ddff)    (ec-c)    {}
3        abc                      (b-d)     (cae)     {bd}{ace}
4        ace                      (bdf)     (ce-)     {bdf}
         bcd     1                (-dd)     (aec)     {ace}
         bcf     1                (-df)     (aec)     {df}{ace}
5        bdf     1                (-df)     (acc)     {ac}
         cdf     1,2
6        bc      1,3              (-d)      (ae)      {ae}
         bd      1,5              (-d)      (ac)      {ac}
         bf      1,5              (-f)      (ac)      {ac}
         cd      1,2
         cf      1,2
         df      1,2,5
7        ab      3                (b-)      (ca)      {ac}
8        ac      3,4              (bd)      (ce)      {bd}{ce}
9        ae      4                (bf)      (c-)      {bf}
         ce      2,4
10       a       3,4,7,8          (b)       (c)       {}
11       b       1,3,5,6,7        (-)       (a)       {}

Figure 8.10. Grasselli-Luccio Search for Prime c-sets

A similar problem occurs with {bcf}, which is also not prime. Actually, in this case, the removal of primitive state d has increased the number of c-sets required for transition from the state. The implied class set for {bdf} is {ac}. This represents a decrease in the constraint of the implied class set and, since {bdf} is a subset of {bcdf} only, it is prime, and the next prime number is issued. C-set {cdf} is a subset of both prime c-set {bcdf} and prime c-set {cdef}. Judged against {bcdf} alone, it could be prime. However, the null set ∅ for {cdef} is a subset of any set, and thus {cdf} is not prime, by prime No. 2. If a c-set has a null set for its implied class set, then none of its subsets can be prime. No subsets of {cdef} need to be developed or tested for primeness. Next to be tested are all subsets of size two from {bcdf}. Subsets of size two from {cdef} will all be non-prime and do not need to be entered. Subsets of size two from {abc} and {ace} are found and tested. Finally, all subsets of size one that are not subsets of some set with an implied class set of ∅ are tested. The table is now complete, and all numbered c-sets represent the prime c-sets to be used in the second step.


8.7.1. Selection of C-Sets Step 2 of the Grasselli-Luccio procedure provides an efficient algorithm for selecting a minimum number of c-sets that will provide both coverage and closure with respect to the original transition table. The algorithm is presented here in the form of trees which accumulate branches to meet the coverage and closure requirements. First we form a coverage table. All primitive states are across the top of the table and all the prime c-sets are down the left-hand side, as shown in Figure 8.11.

          a   b   c   d   e   f
 1 bcdf       X   X   X       X
 2 cdef           X   X   X   X
 3 abc    X   X   X
 4 ace    X       X       X
 5 bdf        X       X       X
 6 bc         X   X
 7 ab     X   X
 8 ac     X       X
 9 ae     X               X
10 a      X
11 b          X

Figure 8.11. Coverage Table for Prime C-Sets

In Step 2, there will be as many trees formed as there are X's in the first selected column. We scan the table and select the column with the fewest X's in it. If there is more than one column with the least number of X's, then we can select any one of them. There is no way, other than working them out, to know which selection would require the least work. In Figure 8.11, d, e, or f could be selected. As it turns out, selecting d or f would result in considerably less work. However, we will choose e because it allows us to show more of the procedure. The algorithm for developing the trees will be as follows: 1. Coverage: From the uncovered columns, select a column with the fewest X's. (Each prime covering this column is a candidate for a minimal cover in the tree thus far developed.) Place a box containing this c-set in the branch of the tree being developed at this point. Above the box, place the primitive states remaining to be covered prior to this selection. To the right of the box, place all of the c-sets in the implied class set of this c-set. If any of the implied class set requirements of this c-set are met through c-sets in the branch leading to this c-set, cross them off as having been met. 2. Closure: If there are implied class set requirements that have not been met, then select one of the c-sets from the unsatisfied set and determine all c-sets that contain this set. Develop as many branches as there are prime c-sets containing this set. Again place above each box the unsatisfied primitive states from the previous branches (not from any "in parallel," only those "in series") and, to the right of the box, the sets from the implied class set of this c-set. If any of these

new sets are already contained in previous branches (in series), then cross them off. If there remain any unsatisfied c-sets, repeat Step 2, Closure, until there are no more requirements of closure. If there are no more requirements on closure, but coverage is incomplete, go back to Step 1, Coverage. If both coverage and closure are complete, that branch ends the tree, and the c-sets along the serial path will form a transition table that meets both the coverage and closure requirements. The recommended procedure is to develop the tree starting first with the c-set containing the most primitive states. When parallel branches are created, place the largest c-sets on the upper branches and develop them first. Rules for discontinuing development include: 1. If there are k m-sets, development can cease if k c-sets would be required. This is because we know the k m-sets can be used, and they would have the greatest flexibility in subsequent design. 2. If a set of n c-sets, where n is less than k, has been found, any development that would lead to n or more c-sets can be discontinued. Note that this assumes that we have worked with the largest c-sets first. We are interested in the largest c-sets that meet the coverage and closure requirements. We now turn to Figure 8.12 for the tree development of our sample problem. We begin with no closure requirement and turn to coverage. Column e is selected (d or f could also be selected). The basic tree stem requires three branches because Column e has three primes that cover it. {cdef}, {ace} and {ae} are placed in the boxes representing the branches. Since nothing has been covered prior to the selection, all states are entered above each box, crossing off the states present in the box below to imply the coverage to this point. The c-sets in the implied class set are placed to the right of each box and become the focus of the next step.
Since the most efficient way to develop the trees is to work with the uppermost branches, and to keep the largest (in primitive state content) c-sets at the top, we proceed to develop the upper tree. If the c-set had any c-sets in its implied-class set, we would continue with Step 2, Closure. However, since it does not, we go back to Step 1, Coverage. Columns a and b remain to be covered. They have the same number of X's, so we arbitrarily select a. We must now generate six branches, again keeping the largest c-set at the top. We continue the procedure working with the top c-set {abc}. We bring forward the remaining coverage requirement {ab} and cross off those states covered by this c-set, in this case leaving no further coverage requirements. There are no closure requirements to bring forward, but {abc} has two c-sets in its implied class set. Further, there is no c-set that will cover both of these c-sets. This serial string can be terminated since there will be no set of c-sets numbering less than four using these c-sets as developed.




Figure 8.12. Minimum C-Set Determination

We drop down to the next c-set, {ace}. The coverage requirement {ab} is brought forward, and a is crossed off because {ace} will cover it. The implied class set for {ace} contains {bdf} as the required content of a c-set. We proceed with the closure requirement (Step 2) and see that primes 1 and 5 will meet the requirement. The two branches are added to the tree. Keeping to the top, we continue to develop {bcdf}. Coverage is now complete, and the implied class set contains the c-set {ace}. However, {ace} is already covered in the previous serial string and is crossed off. Closure is therefore complete and coverage is complete. We have found a group of three c-sets that will form an equivalent transition table. We now need to look only for a group of two c-sets that will provide both coverage and closure. Since the {bdf} branch would also require three or more states, we discontinue its development. Dropping down to the next branch, although {ab} completes coverage, it requires {ac} for closure. At least three c-sets would be required, and we already have (larger) c-sets to provide that system. C-set {ac} requires another c-set both for coverage and for closure, and c-sets {ae} and {a} will require at least one more c-set for coverage. We then drop down to the next major branch and develop it following the same procedures. In this case, we find that two c-sets will cover and provide closure, and this is the minimum. We select the larger sets {ace} and {bcdf}. Frequently, a case will arise where there is more than one unsatisfied closure requirement (to the right of a c-set). In this case, one of them is selected to form the next set of branches, and the other c-sets are carried through to the right of these branches as stipulated in Step 2, Closure.


Repeating once again: the algorithm begins with a coverage requirement (because there is no closure requirement yet). The focus changes to closure until all closure requirements are met. When all closure requirements are met, if coverage is incomplete, the focus changes back to coverage. In general, closure requirements take precedence over coverage as you develop the tree; but if the closure requirements are null, then coverage is pursued. The amount of effort can vary depending on the column originally selected. The results, however, will be the same. Finally, the transition table is formed by renaming the c-sets, setting up the next states to go to the appropriate c-set (by primitive state content), and filling in the output section (based on compatibility with the original primitive state outputs). The next states can be determined from the next state vectors or through a Huffman-Mealy closure test.
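The tree search can also be emulated mechanically: at textbook scale, an exhaustive search over combinations of prime c-sets, smallest count first, finds the same answer the tree finds by pruning. The primes and implied class sets below are transcribed from Figure 8.10 as best it can be read:

```python
from itertools import combinations

def min_cover_with_closure(states, primes, implied):
    """Fewest prime c-sets giving coverage (every primitive state in
    some chosen c-set) and closure (every implied class set of a
    chosen c-set contained in some chosen c-set)."""
    for k in range(1, len(primes) + 1):
        for choice in combinations(primes, k):
            if set().union(*(set(c) for c in choice)) != set(states):
                continue                      # coverage fails
            if all(any(req <= frozenset(c) for c in choice)
                   for c in choice for req in implied[c]):
                return choice                 # closure holds too
    return None

implied = {
    'bcdf': [frozenset('ace')],
    'cdef': [],
    'abc':  [frozenset('bd'), frozenset('ace')],
    'ace':  [frozenset('bdf')],
    'bdf':  [frozenset('ac')],
    'bc':   [frozenset('ae')],
    'ab':   [frozenset('ac')],
    'ac':   [frozenset('bd'), frozenset('ce')],
    'ae':   [frozenset('bf')],
    'a':    [],
    'b':    [],
}
min_cover_with_closure('abcdef', list(implied), implied)
# -> ('bcdf', 'ace'), the two-row solution found by the tree
```

The tree method prunes this same search by hand, discarding any branch that cannot beat the best count found so far.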

Current State (bcdf) (ace)

Transition Table z Next State Input (x) 0 1 Input (x) 0 1 1 0 1 0

Figure 8.13. Final Transition Table for Grasselli-Luccio Example


Problem 8.6. Find the m-sets for the following tables. Use the Grasselli-Luccio Method to find a minimum row transition table.

Transition Table Current Next State z State Input(x) Input(x) 0 1 0 1 a b c 0 b d 0 c f 0 0 d 0 e d 1 f e c 1 -

Transition Table Next State Current Input State State 00 01 10 a b c d b c b e c b f d f d e d f b d f g g a c

Output State Input State 11 00 01 10 d 0 1 d 0 1 1 1 0 c 1 0 f 1 0 e 0 0 g -

11 0 1 0

8.8. Additional Problems for Chapter 8 In the problems at the end of this chapter, most fundamental mode tables have been constructed with the output as a function of the internal state only. These tables are easily changed to make the output a function of the total state by expanding the output section of the table to include columns for all input states. The output value from the collapsed table becomes the value of the stable state output, and the transient states are set in accordance with fundamental mode rules; that is, transitions of 0 to 0 are set to 0, transitions


of 1 to 1 are set to 1, and the others are don't cares unless anticipation is desired. Those are set to the output value for the state to which they are going (see Chapter 7). The additional problems for Chapter 8 will contain stripped-down tables (that is, the output section contains only one column). The transition table can be expanded to be a function of the input state as shown in Figure 8.14.

Transition Table (Before Expansion)
Current   Next State    z
State     Input (x)
          0     1
a         a     b       0
b         a     b       1

Transition Table (After Expansion)
Current   Next State    Output z
State     Input (x)     Input (x)
          0     1       0     1
a         a     b       0     -
b         a     b       -     1

Figure 8.14. Expansion of Stripped-Down Tables
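The expansion rule of Figure 8.14 can be sketched in Python (`None` stands in for the don't care dashes; the example is the figure's two-state table):

```python
def expand_output(table, z):
    """Make the output a function of the total state: a stable entry
    keeps its row's output; a 0->0 or 1->1 transition keeps it too;
    any other transition becomes a don't care (None)."""
    expanded = {}
    for s, next_states in table.items():
        row = []
        for nxt in next_states:
            if nxt == s or (nxt is not None and z[nxt] == z[s]):
                row.append(z[s])
            else:
                row.append(None)
        expanded[s] = tuple(row)
    return expanded

table = {'a': ('a', 'b'), 'b': ('a', 'b')}   # next states for x = 0, 1
z = {'a': 0, 'b': 1}
expand_output(table, z)
# -> {'a': (0, None), 'b': (None, 1)}
```

With anticipation, the don't cares would instead take the output value of the destination state, per Chapter 7.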


Problem 8.7. Figure 8.15 contains some stripped-down transition tables. Perform a Huffman-Mealy reduction on these tables to find minimal covers. Problem 8.8. Expand the following transition tables to permit the output to be a function of the total state. Perform Huffman-Mealy reduction on these tables to find minimal covers.

Current State a b c d e f Current State a b c d e f

Input State 00 01 10 a b c d b d c d b c f c a f a.

z 11 e e e e 0 0 0 1 1 0

Current Input State State 00 01 10 11 a a b c b d b e c a c e d d b f e b f e f d f e b. Current Input State State 00 01 10 11 a a b c b a b - d c a c e d - b c d e f c e f a f - d

z 0 0 0 1 1 1 z 0 0 0 1 1 1

Input State z 00 01 10 11 a b c - 0 a b - d 0 a c e 0 b c d 1 b f e 1 a f d 01

c. d. Figure 8.15. Stripped-Down Transition Tables for Problems 8.7 and 8.8


Problem 8.9. Figure 8.16 contains some stripped-down transition tables. Perform a Huffman-Mealy reduction on these tables to find minimum covers. Problem 8.10. Expand the following transition tables to permit the output to be a function of the total state. Perform Huffman-Mealy reduction on these tables to find minimal covers.

Current State a b c d e f

Input State 00 01 10 a b c a b a c e f a e a f a.

z 11 d d d c d 0 0 0 1 0 1

Current Input State State 00 01 10 11 a a b c b a b - d c a c d d - b c d e e b f f a f d b.

z 0 0 0 1 1 0

Current State a b c d e f g h i

Input State 00 01 10 a b c d b f c d b c a b i f h i b c a h d i

z 11 e g e g e e 0 0 1 0 1 1 0 0 1

Current Input State State 00 01 10 11 a a e c b f b - d c a c h d e c d e f e - d f f b g g a - g h h i c h i a i - h

z 0 0 0 0 0 1 0 1 1

c. d. Figure 8.16. Stripped-Down Transition Tables for Problems 8.9 and 8.10


Problem 8.11. Expand the following transition tables to permit the output to be a function of the total state. Perform Huffman-Mealy reduction on these tables to find minimal covers.

Current State a b c d e f

Input State 00 01 10 a b c a b a c e c a e b c a.

z 11 d d d d f 0 0 1 0 0 1

Current Input State State 00 01 10 11 a a b c b e b f c e c f d a d - g e e d c f - b c f g - b c g b. Current State a b c d e f g h i

z 0 0 0 0 0 0 1

Input State z 00 01 10 11 0 a b c - 0 0 a b - d 0 0 a c e 0 0 f c d 1 0 b g e 0 0 h f e 0 0 i g g e 0 1 h b c - 1 i b c - 0 c. d. Figure 8.17. Stripped-Down Transition Tables for Problem 8.11 Problem 8.12. Perform compatible sets analysis on the expanded tables of Figure 8.15. Problem 8.13. Perform compatible sets analysis on the expanded tables of Figure 8.16. Problem 8.14. Perform compatible sets analysis on the expanded tables of Figure 8.17. 00 a a e e e a 11 d f d f f d

Current State a b c d e f g h

Input State 01 10 b c b c b c g h g h g - h


Problem 8.15. Using the method of your choice, find the minimum state graphs for the stripped down pulse mode transition tables in Figure 8.18.

C.S. a b c d e

Input State 0 1 b c a e b d b d a c

O.S. n 0 0 1 0 1

C.S. a b c d e f

Next State Input State 0 1 b e a c d a f a f b d b b.

Output Input State 0 1 0 0 0 1 0 0 1 0 1

a. Next State Input State 0 1 c e c b b a d e Output Input State 0 1 1 0 1 0 0 0 0 -

Current State a b c d e f

Current State a b c d e f

Next State Input State 01 10 a b a c d a e c c f a a

Out z 01 0 0 1 0 0 1

c. d. Figure 8.18. Stripped-Down Transition Tables for Problem 8.15


Problem 8.16. Using the method of your choice, find the minimum state graphs for the stripped-down pulse mode transition tables in Figure 8.19.

Transition Table Current Input State State 00 01 10 11 a a b b d b f e b e c d e b d c b e e d g e g f c b g g f g b g a. Transition Table Current Input State State 00 01 10 11 a a a a b b c d e c f h d h h e h g f f a g g a h h h h a

Out put n 0 1 0 0 1 0 1

Transition Table Current Input State State 00 01 10 11 a b d b c b a e f e c d a f a d b a e g e a d f c f b d g c g a a f d b.

Output Input State 00 01 10 0 0 0 0 0 0 0 1 1 1 0

11 0 0 1 0

Out put n 00 ----01 10 00

Transition Table Current Input State State 00 01 10 11 a a a b c b a a d b c a a a b d a c b e e a c f g f a a d e g a h f g h a a b c

Output Input State 00 01 10 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 0

11 0 1 1 0

c. d. Figure 8.19. Stripped-Down Transition Tables for Problem 8.16 Note: The following problems are not particularly realistic, but are written to provide reasonable exercises on the Grasselli-Luccio Method. The transition tables will have no output section. Problem 8.17. Given the transition table at the right and the M-sets {abc}, {bd} and {cd}, use the Grasselli-Luccio method to find: 0 1 b c a a. the prime c-sets b b b b. a minimum state transition table c d a d d a


Problem 8.18. Given the transition table at the right and the M-sets {abc}, {bde}, and {cde}, use the Grasselli-Luccio method to find: 0 1 b a a a. the prime c-sets d b b b. a minimum state transition table c - a d e a e - c Problem 8.19. Given the transition table at the right and the M-sets {abd}, {ace}, {bde}, and {cd} use the Grasselli-Luccio method to find: 0 1 c b a a. the prime c-sets a b b b. a minimum state transition table c d e d c e - d Problem 8.20. Given the transition table at the right {cde} use the Grasselli-Luccio method to find: 0 c a a. the prime c-sets a b b. a minimum state transition table c d d c e Problem 8.21 . Given the transition table at the right {cde} use the Grasselli-Luccio method to find: 0 b a a. the prime c-sets d b b. a minimum state transition table c b d e e b Problem 8.22 . Given the transition table at the right {bce} use the Grasselli-Luccio method to find: 0 b a a. the prime c-sets a b b. a minimum state transition table c b d c e c f c

and the M-sets {acd}, {bde}, and


1 b b e d

and the M-sets {acd}, {bde}, and


1 c c c d c

and the M-sets {abc}, {adf}, and 1 a d d a f d


Problem 8.23 . Given the transition table at the right and the M-sets {abc}, {adf}, and {bce} use the Grasselli-Luccio method to find: 0 1 b a a a. the prime c-sets a d b b. a minimum state transition table c b d d c f e c f f c d Problem 8.24. Given the transition table at the right and the M-sets {abc}, {acd}, and {ce}, {bd}, and {de}, use the Grasselli-Luccio method to find: 001 010 100 a a e a a. the prime c-sets c e b b. a minimum state transition table b c c c d e d a c b e e Problem 8.25 . Given the transition table at the right and the M-sets {abe}, {cdf}, and {bc}, and {ef}, use the Grasselli-Luccio method to find: 00 01 10 11 b c b e a a. the prime c-sets e b e b b. a minimum state transition table c e e c b d f e d b d c f e a d f e f b Problem 8.26. Given the transition table at the right and the M-sets {abe}, {bdf}, and {cef}, and {def}, use the Grasselli-Luccio method to find: 00 01 10 11 a c c d a a. the prime c-sets b e f b b. a minimum state transition table d e d c a e e d d a d e f e e f d f a Problem 8.27. Given the transition table at the right and the M-sets {abe}, {bdf}, and {cef}, and {def}, use the Grasselli-Luccio method to find: 00 01 10 11 a d f c a a. the prime c-sets e d f e b b. a minimum state transition table d f c c a f f d d d f f e e b c e f a



Problem 8.28. Given the following stripped-down transition tables, expand them to make the output a function of the total state and then find all prime c-sets using the Grasselli-Luccio method. Complete the method, finding a minimum state transition table.
State a b c d e f a a a Input State 00 01 10 11 b b f f a. c c e e d d d 0 0 0 0 0 1 a b c d e f z State a c c Input State 00 01 10 11 b b d d b. f f 0 0 0 0 1 1 z

- e - e e e


Chapter 9: State Assignment for Sequential Circuits

9. State Assignment For Sequential Circuits


Once a transition table has been developed, the next step in designing a circuit is to select the memory devices to be used and to assign conditions on these memory devices to represent the states of the system being designed. This means that each row of the transition table will be assigned a condition on the memory elements. The memory elements used in this text will all be binary elements that can be controlled to "off" and "on" conditions, represented by 0 and 1, respectively. A discussion of the way that the memory elements operate, or are set to 0 or 1, will be deferred until the next chapter, since only the assignment of the conditions is of concern at this time. The condition of an individual element will be referred to as its state. The combined state of all the individual elements determines the internal state of the circuit.

The assignment of individual element states to the system internal states can make a substantial difference in the complexity or expense of the resulting circuit. If circuits are operating in pulse mode, which eliminates the complications created by delays and multiple signal changes, then economics becomes the motivating factor. If we are designing asynchronous, or fundamental mode, circuits, the designer must also be concerned with the dangers that attend more than one memory element changing as the system moves from one internal state to another. This problem is referred to as a "race" problem, and it exists only for fundamental mode.

A perfect algorithm for state assignment does not exist. However, there are guidelines that will provide very good, if not optimal, designs. There are many approaches to this problem. The approach taken here is a single-pass method that works with pair-wise adjacencies. (Adjacencies are in the Karnaugh map sense, representing edge-connected nodes in an n-cube.) It works very well for simple problems and brings into evidence, more clearly than other methods, the goal of the assignment process.

9.1. Rules for State Assignment

Humphry developed a set of rules that help to cluster the eventual control signals (on the Karnaugh map) to the memory elements. His rules are generally given as Rules 1, 2 and 3 below. We will refer to these rules as "Rules of Economy." For pulse mode circuits, they are all we need. However, for fundamental mode we will add Rule 0 below. Rule 0 is added to eliminate races at the outset and takes highest priority when assigning states for fundamental mode. By putting Rule 0 at top priority, races will be eliminated, if possible, providing a one-pass assignment algorithm. Once the constraints of Rule 0 are satisfied, if options still remain, then Humphry's rules of economy can be applied to obtain a reasonable (if not minimum) cost realization. The rules are as follows:

0. (Fundamental Mode only) States that are connected on the flow graph must be adjacent, or at least effectively adjacent.
1. (Humphry's Rule 1) Under a given input condition, those states that go to the same state should be adjacent.
2. (Humphry's Rule 2) For each state, those states that are the "next state" entries for adjacent input states should be adjacent. With multi-pulse circuits, states that share a common present state should be adjacent.
3. (Humphry's Rule 3) States with the same output class should be adjacent.
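The pair-collecting work of Rules 1 and 2 is mechanical enough to sketch in code. The following is a minimal illustration (not from the text); the dictionary representation of a transition table, with `None` marking a don't care, is an assumption made for this sketch.

```python
# Sketch (illustrative): collecting the desired-adjacency pairs of
# Humphry's Rules 1 and 2 from a transition table.  The table format
# (dict: state -> {input state -> next state}) is assumed here.
from itertools import combinations

INPUTS = ["00", "01", "11", "10"]
# Input-state pairs that differ in one bit (Karnaugh-map neighbors).
ADJACENT_INPUTS = [("00", "01"), ("01", "11"), ("11", "10"), ("10", "00")]

def rule1_pairs(table):
    """Column operation: states whose next-state entries match in a column."""
    pairs = []
    for i in INPUTS:
        for s, t in combinations(sorted(table), 2):
            ns, nt = table[s][i], table[t][i]
            if ns is not None and ns == nt:
                pairs.append((s, t))
    return pairs

def rule2_pairs(table):
    """Row operation: next-state entries under adjacent input states."""
    pairs = []
    for s in sorted(table):
        for i, j in ADJACENT_INPUTS:
            ns, nt = table[s][i], table[s][j]
            if ns is not None and nt is not None and ns != nt:
                pairs.append(tuple(sorted((ns, nt))))
    return pairs

# The next-state portion of the table used in Figure 9.1.
FIG_9_1 = {
    "a": {"00": "a", "01": "b", "10": "c", "11": "a"},
    "b": {"00": "d", "01": "b", "10": None, "11": "d"},
    "c": {"00": "a", "01": "b", "10": "c", "11": "c"},
    "d": {"00": "d", "01": "d", "10": "c", "11": "d"},
}
```

Running `rule1_pairs(FIG_9_1)` yields, among others, the pairs (a,c) and (b,d) from the column for input state 00, matching the tabulation developed later in this chapter.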


There are many ways that we can approach the application of these rules. Humphry suggested that first priority go to the application of Rule 1, second priority to Rule 2 and third priority to Rule 3. A Gray code over the binary number span provided by the required number of flip-flops is used to cluster the states to meet the adjacency requirements. For fundamental mode, a test for races is made after the assignment and, if it fails, a different selection of assignments is made.

If we visualize the resultant circuit, there will be a set of combinational circuits to drive the memory elements, and a set of combinational circuits to drive the outputs. The relative importance of Rule 3, which tends to minimize the output combinational circuits, to Rules 1 and 2, which tend to minimize the combinational circuits driving the memory elements, should be in proportion to the number of circuits involved. This is also a function of the device type to be used, since some devices have 1 input and others have 2.

The problem is not a simple one to solve. No method has been found to obtain a best solution (other than an exhaustive search over all possibilities). There are several methods, however, that will yield excellent results.

9.2. A Simple Method For Simple Circuits

The method presented here applies to both pulse mode and fundamental mode (by adding Rule 0). It is a one-pass method, since races are automatically eliminated in the process. Rules 1 and 2 are given equal priority. Rule 3 can be either weighted in proportion to the number of combinational circuits involved, or used only when some freedom in assignment remains after the other rules have been applied. Adjacencies affecting economical assignment are weighted in accordance with the number of pair-wise appearances in the Karnaugh map through a simple tabulating technique referred to here as "voting."
Transition Table

                 Next State                  Output State
Current State    Input State (x1,x2)         Input State (x1,x2)
                 00    01    10    11        00    01    10    11
      a           a     b     c     a         0     0     -     0
      b           d     b     -     d         0     0     -     0
      c           a     b     c     c         1     -     0     1
      d           d     d     c     d         1     -     0     0

Figure 9.1. Transition Table for State Assignment

The method is presented through the example in Figure 9.1. Since the circuit is fundamental mode, Rule 0 must be applied. An examination of Rule 0 and Rule 2 will show that the pairs of states connected in the state graph are a subset of the pairs of states that we obtain from Rule 2. The result is that Rule 2 will be used to obtain pairs of states that should be adjacent, but the pairs will be divided into three sets: two sets for Rule 0, representing the constraints required to prevent races, and one set containing all other pairs from Rule 2, which are of concern only from an economic point of view. Rule 0 is used to obtain pairs of states that must be (effectively) adjacent.

At this point, it is necessary to discuss what is meant by effectively adjacent. Fundamental mode operation requires that only one signal change at a time. Since the output signals from the memory devices are being used in the combinational circuits internally, not more than one device can be allowed to change at a time.

Observe the transition table in Figure 9.1. Consider the system in state a with input state 00. As the input state changes to 01, the system will move to state b. In this particular case, state b does not have to be adjacent to state a directly, since we can patch up the table and have state a go to state c first, where signals will be generated to drive the system to state b. If state c is adjacent to state a and state b is adjacent to state c, then the requirement of only one memory element changing at a time will be met. The relative motion in the output section must not produce a conflict (in this case it would be necessary that the transient cell in state c be modified also, changing the don't care to a zero). When a multiple state transition reaches the proper state through adjacent internal states, then for all effective purposes, the transition is to an adjacent state.

If we examine state a under input state 11 and visualize the input state change to 10, we see that the table can be modified to go to state d, or it can be modified to go to state b. However, if it is modified to go to state b, two don't cares will be lost (one in the next state section and one in the output section). Notice that stable state b, under input state 01, will move under input state 00 to state d. In this case, there is no freedom, and internal states b and d will have to be adjacent in the final state assignment. Rule 0 has been developed to provide pairs of states in two categories: those that must be adjacent and those that can be patched up to be effectively adjacent.

Let us now examine Rule 2 in detail. "For each state" can be translated to say, "for every row." This rule will be applied to each row.
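The one-element-at-a-time requirement is easy to check mechanically: a transition is race-free when the two state codes differ in exactly one bit, and a patched chain is "effectively adjacent" when each step does. The following sketch is illustrative only; the list-of-transitions representation is an assumption of this example.

```python
# Sketch (illustrative, not the text's procedure): flagging races in a
# candidate assignment.  A direct transition s -> t is race-free when
# the codes of s and t differ in exactly one bit (Hamming distance 1).

def hamming(code_a, code_b):
    return sum(x != y for x, y in zip(code_a, code_b))

def races(assignment, transitions):
    """Return transitions whose state codes differ in more than one bit.

    assignment  -- dict: state name -> bit string, e.g. {"a": "00", ...}
    transitions -- iterable of (present_state, next_state) pairs
    """
    return [(s, t) for s, t in transitions
            if s != t and hamming(assignment[s], assignment[t]) > 1]

# The assignment eventually chosen in this chapter: a=00, b=01, c=10, d=11.
option1 = {"a": "00", "b": "01", "c": "10", "d": "11"}
# Direct transitions of Figure 9.1; c -> b is the one later patched.
moves = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "a"), ("c", "b"), ("d", "c")]
print(races(option1, moves))   # only c -> b (codes 10 -> 01) needs patching
```

The lone offender, c to b, is exactly the transition the text later routes through state a, so that each step changes a single memory element.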
"Those states which are the next state entries" (the next state entries are the names contained in the cells in the next state portion of the table) "for adjacent input states" (e.g., {00,01}, {00,10}, {01,11}, {10,11}) "should be adjacent." We can visualize the input states being rearranged in Karnaugh map order. This rule then tends to make states that are adjacent on the Karnaugh map (within each row) also adjacent in their binary coding. This has a tendency to reduce the circuitry driving the memory elements, since fewer elements will be involved in the change of state.

In Figure 9.2, the next state portion of the transition table has been separated from the transition table to allow the discussion to focus on Rules 0, 1 and 2. Since Rule 2 is a row operation, the pairs that should be adjacent under this rule are written to the right of each row. The pairs are segregated into three categories, two for Rule 0 and one for the remainder. The two categories for Rule 0 are headed "Must," for those transitions which have no options, and "Options," for those transitions where options exist. The third category is simply headed "Economics."


          Next State                       ------------- Rule 2 -------------
          Input state (x1,x2)              ------ Rule 0 ------
State     00   01   10   11      Must           Options                Economy
  a        a    b    c    a                     (a,b)(a,c)(a,b)(a,c)
  b        d    b    -    d      (b,d)(b,d)
  c        a    b    c    c      (a,c)          (b,c)                  (a,b)
  d        d    d    c    d                     (c,d)(c,d)

Rule 1 (by column):  00: (a,c)(b,d)   01: (a,b)(a,c)(b,c)   10: (a,c)(a,d)(c,d)   11: (b,d)

Voting Set                   Assignment (y1,y2)
(a,b)  ||||  4               Option 1:  y1=0: a c      Option 2:  y1=0: a c
(a,d)  |     1                          y1=1: b d                 y1=1: d b
(b,c)  ||    2
(c,d)  |||   3
Totals:                      Option 1: 7               Option 2: 3

Figure 9.2. State Assignment Voting on Options
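The tabulation, or "voting set," of Figure 9.2 can be sketched as a simple tally. This is an illustrative sketch (the representation is assumed, not the text's); the "must" pairs are filtered out because any assignment surviving Rule 0 satisfies them anyway.

```python
# Sketch: tabulating the voting set, excluding the forced ("must") pairs.
from collections import Counter

def voting_set(pairs, must):
    """Tally desired-adjacency pairs, excluding the forced ones."""
    forced = {tuple(sorted(p)) for p in must}
    return Counter(tuple(sorted(p)) for p in pairs
                   if tuple(sorted(p)) not in forced)

# Pairs collected from Rules 1 and 2 for the example, with their
# multiplicities (counts for the must pairs are irrelevant: filtered out).
pairs = ([("a", "b")] * 4 + [("a", "c")] * 6 + [("b", "d")] * 4 +
         [("b", "c")] * 2 + [("c", "d")] * 3 + [("a", "d")])
votes = voting_set(pairs, must=[("b", "d"), ("a", "c")])
print(votes[("a", "b")], votes[("c", "d")])   # prints 4 3
```

The resulting tallies, (a,b): 4, (c,d): 3, (b,c): 2, (a,d): 1, reproduce the tick marks in Figure 9.2.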

Using Rule 2, the adjacent input states are {00,01}, {00,10}, {01,11} and {10,11}. In the first row, the pairs for the adjacent input states are (a,b), (a,c), (b,a) and (c,a). We need to examine these pairs to see into which categories they fall. Rule 0 will always have one state of the pair encircled, since it is concerned with a transition on the state graph. Under input state 00, state a may go either to state b (with input state 01) or to state c (with input state 10). There are options for both of these, so they are both entered in the Options category. Under input state 11, the same transitions can occur, and the entries are entered again in the Options column. Pair (a,c) is the same as pair (c,a), and both should be entered as (a,c) to make our work easier later.

Moving to the next row, the associated pairs for adjacent inputs are (d,b) and (b,d). The stable state under input state 01 will move to state d under input state 00. There is no option, and so (b,d) is a "Must" entry. The transition that occurs in moving to state d under input state 11 is also a must entry. It is entered as well (one must is enough, but these entries would be in the voting set if another flip-flop had to be added).

Note that for the adjacent input pair 00-01 with internal state c, no stable state is involved. The pair (a,b) does not fall under Rule 0; for economic reasons, however, the states should be adjacent, so the pair is entered under the "Economics" column.

We now examine Rule 1 in detail.

1. "Under a given input condition" can be translated to "for each column in the transition table."


2. "Those states which go to the same next state should be adjacent" translates to "those rows which have the same entries in that column should be adjacent."

This is a column operation. Referring to Figure 9.2, pairs of states that have the same state for their next state entry in each column are listed directly below that column.

Rule 3 would be viewed as a row operation if the total output class (across all input states) were used. However, it is more likely to produce a minimum cost map if it is treated as a column operation. Its implementation is therefore the same as Rule 1, except that it works with the output section of the transition table. In this problem, there will be two memory elements and (at least) two combinational circuits generated from the next state section of the transition table. There will be only one output combinational network. Pairs generated from Rule 3 should therefore have (at most) half the weight of those generated from Rule 1.

9.2.1. Effects of "Must" Entries (Fundamental Mode Only)

Once all pairs that should be adjacent have been noted, there are still several strategies that could be followed. In fundamental mode, there may very well be entries in the "Must" column. These entries may:

1. Completely constrain the assignment, in which case there is no freedom in establishing the state adjacencies on the Karnaugh map.
2. Establish some adjacencies, leaving some to be determined based on economics.
3. Over-constrain the assignment.

In the last case, it is not possible to realize the resultant circuit with that number of memory elements. The only way to continue the design is to add a memory element, which allows the table to have twice as many rows (and then the system can have twice as many states). This will cause all of the entries in the "Must" column to move over to the "Options" column.
However, if only one entry was made for each required adjacency, we need to see if it should be entered more than once in the Options column in order to obtain a valid "vote."

In the first case, the only remaining choices left to the designer are which state to make all memory devices "off" or "on," and one or two additional choices regarding symmetries that do not carry any substantial impact on system cost. For the second case, where some additional choices remain with regard to adjacencies, we first make the assignments required by the "Must" entries and then develop a strategy for making the remaining assignments. Two reasonable strategies are:

1. If there are many degrees of freedom left in the assignment, then make assignments to those pairs that appear the greatest number of times in the voting set.
2. If there are just a few degrees of freedom left, set up a table of all possible remaining options and see which assignment covers the most pairs of desired adjacencies.

The second of these strategies is developed in Figure 9.2. The total process for that sample problem includes the following:

1. Pairs under the "Must" column have to be adjacent, and these assignments are made first.


2. A voting set is made up from all pairs from Rule 2 and from Rule 1, excluding the pairs in the "Must" column.
3. In this case, only two options remain, and these are examined with respect to the voting set to see which option covers the most remaining pairs of desired adjacencies.

Since four states are needed, two memory elements are required (call them y1 and y2). There is generally no economic reason for selecting any particular state as the state with all memory elements "off" or "on," although in some cases it might help to simplify the output combinational circuits if the outputs of the memory elements can be used directly. In those cases where the memory elements have their outputs available in both uncomplemented and complemented forms, there is even less concern. Usually, fundamental mode designs will not have both output forms available. It is also important to note that the state assignments derived can always be rearranged on the Karnaugh map, as long as the adjacency relationships are not disturbed.

We will simply select state a as being (y1,y2) = 00. From the "Must" column, we see that c must be adjacent to a. Thus, c could be either 01 or 10. This is a simple symmetry difference and is usually arbitrary. Again, we can always rearrange the entries on the Karnaugh map with respect to symmetries when we are finished. Let us arbitrarily select 10. Now, also in the "Must" column, b must be adjacent to d. If there were an additional must entry of (a,b) or (c,d), the system would be fully constrained and no further choices would exist. Two more constraints of the type (a,d) and (c,d), on the other hand, would over-constrain the assignment, and another memory element would have to be used. In this case, the constraints leave only two options, and the voting strategy is an excellent one to use. The Karnaugh maps for the two options are shown in Figure 9.2, with the results of the voting process (y1 and y2 are the memory elements).

1. All "must" entries are removed from the voting set (crossed off) to reduce the effort. (They will be met in both options.)
2. The voting proceeds by tabulation to show how many pairs are covered in the two options.
3. The option with the greatest number of pairs covered is selected. In this example, option one, with the (a,b) and (c,d) adjacencies, is selected.

It is now necessary to return to the table to see if there is sufficient freedom to patch up the table to meet the requirement on effective adjacencies in the Options column. In this case, (a,b) and (c,d) have been met through the voting process, but (b,c) has not. Attention is thus drawn to row c, and it is seen that the 11-to-01 input state transition requires attention. The situation is corrected by assigning the next state entry to a. The transition from c to b will be from c to a first (which results in only one element changing), and then to b, which results in the other element changing. There is no need to modify anything in the output section, since the don't care is permissible for the sequence.
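The scoring of the two remaining options can be sketched mechanically. This is an illustrative sketch only (the map layout and data structures are assumed): each option's Karnaugh-map adjacencies are intersected with the voting set and the weighted votes summed.

```python
# Sketch: scoring candidate assignments by the weighted votes their
# Karnaugh-map adjacencies cover.

def adjacent_pairs(assignment):
    """All state pairs whose codes differ in one bit under this assignment."""
    states = list(assignment)
    return {tuple(sorted((s, t)))
            for i, s in enumerate(states) for t in states[i + 1:]
            if sum(x != y for x, y in zip(assignment[s], assignment[t])) == 1}

def score(assignment, votes):
    return sum(n for pair, n in votes.items()
               if pair in adjacent_pairs(assignment))

# Voting set of Figure 9.2 (must pairs already crossed off).
votes = {("a", "b"): 4, ("a", "d"): 1, ("b", "c"): 2, ("c", "d"): 3}
option1 = {"a": "00", "b": "01", "c": "10", "d": "11"}   # b under a, d under c
option2 = {"a": "00", "d": "01", "c": "10", "b": "11"}   # d under a, b under c
print(score(option1, votes), score(option2, votes))   # prints 7 3
```

Option 1 covers (a,b) and (c,d) for seven votes; option 2 covers only (a,d) and (b,c) for three, which is why option 1 is selected.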


The final transition table with state assignments, ready for final circuit design, is shown in Figure 9.3. Note that the rows just happen to be in binary order. This will not generally be the case.

Transition Table

Current State    Next State                  Output State
  (y1,y2)        Input State (x1,x2)         Input State (x1,x2)
                 00    01    10    11        00    01    10    11
  (a) 00         00    01    10    00         0     0     -     0
  (b) 01         11    01    -     11         0     0     -     0
  (c) 10         00    00    10    10         1     -     0     1
  (d) 11         11    11    10    11         1     -     0     0

Figure 9.3. Transition Table with States Assigned


9.3. State Assignment Strategies for Pulse Mode

There are no races in pulse mode, and therefore Rule 0 is not applied. The process for Rules 1, 2 and 3 is the same, but for Rule 2 all entries are placed in the Economics category. Since there are no constraints to reduce the options down to a reasonable number for voting purposes, the first strategy would be used: simply covering pairs (making them adjacent), with priority given to the pairs appearing the most times in the voting set. With multi-pulse input, the pulses are not adjacent inputs in the transition table. However, we will see in Chapter 10 that to minimize the pulse input combinational circuit, they should be viewed as being adjacent.

9.3.1. Exceptions to the Rules

It was mentioned earlier that Rule 3 should be given a weight that is roughly proportional to the ratio of the number of output combinational circuits required to the number of combinational circuits needed to drive the memory elements. If the circuit is a binary counter and requires a display of the count in binary, or is to feed a binary-to-decimal converter, there will need to be as many output combinational circuits as there are binary bits in the count. This is considered an overriding situation, and the state assignment is made so that the outputs of the memory devices can feed the output directly. (The output combinational circuits thus become connections directly from the outputs of the memory elements to the output pins.)

9.3.2. Another Example

To conclude the chapter, another example is presented. This example is the same as Sample Problem No. 1 in Chapter 8, except that the output is a function of internal states only. Figure 9.4 shows the development of the minimum row transition table, and Figure 9.5 shows the state assignment process. Since two memory elements are required, four states are available, and the transition table is drawn as shown in Figure 9.5, including all four states. The unused fourth state provides an optional state for every column, and therefore there are no entries in the "Must" column. It should be noted that this will be the case (available "if needed" states) unless there are exactly 2^n states in the reduced state graph.


For this example, the options available under input state 00 will be resolved later as optional state assignments and are not considered in the Rule 2 and Rule 1 entries. Note that the column for input state 10 has not been used with Rule 1. The reason is that all combinations of pairs of states would get one entry, which would have a net influence of zero in the voting.

Transition Table

                Next State
Current         Input state (x1,x2)        Out
State           00    01    10    11        z
  a              -     a     b     c        0
  b              -     a     b     d        0
  c              -     a     c     e        0
  d              -     b     c     d        0
  e              -     f     c     e        1
  f              -     a     f     e        1

[Figure 9.4 also shows the implication chart and the partitioning steps, developed as in Chapter 8, that reduce the six states to three merged states (among them {abd} and {ef}), together with the reduced three-row transition table, whose output column is z = 0, 0, 1.]

Figure 9.4. Transition Table for State Assignment, Example No. 2

Since the starting cell is arbitrary, why not have the output occur with both memory elements on? If we do, then the y1y2 cell becomes state . We also notice that if y1y2 could be kept as , then the output could be formed with just y2 being on, eliminating an output combinational circuit completely. When we place ((,) received the most votes), we place it in the y1y2 cell. (,) was next in number of votes received and so is next placed in y1y2, leaving to occupy the y1y2 cell. All states have been assigned and all the desired adjacencies, except (,), have been met.


We must now return to the Options column in the Rule 0 part of the table, and to the transition table, to see if the table must undergo patching. This is the principal reason for maintaining the Options column separate from the Economics column. Each of the pairs must be checked to ensure that the transition from the stable state to the next stable state will occur through single flip-flop changes. In this case, they have all been met through the voting process and no patching is required.
State 00 Rule 1, 3 (,) (,) (,) Input State (x 1x 2) 01 (,) 10 11 (,) z 0 0 1 1 (,) Assignment y1 y2 0 1 0 1 ----------------Rule 2 -------------------------Rule 0--------Must Options (,) (,) (,) Economics (,)

Voting (,) 2+1/2 (,) 3

(,) 1

Figure 9.5. State Assignment Voting on Adjacencies

There remains only the selection of the options in column 00. There are 2 × 2 × 2 = 8 combinations that could be investigated. Many authors suggest that you select the state to be the same as the row, if that is an option. This is a pretty good rule. However, in this case, we will investigate a few of the combinations to see their effect. All combinations are shown in Figure 9.6.

First note combination No. 4. In this case, goes to and goes to . This would result in the circuit going into a high frequency oscillation. These situations are frequently referred to as "buzzers," a term going back to relay days, when the mechanical oscillation was quite audible. Of course, this assignment would be unacceptable and is not allowed. Similarly, combination No. 5 can be ruled out.

Figure 9.6 also shows the reinforcement to the number of adjacencies for the state assignments selected in Figure 9.5. Looking at the column for input state 00 (Rule 1), we see that if all are , then we pick up four adjacencies in (,) and three in (,) for a total of seven. We also note that if the row entry is an , we will have to patch the table, since and are not adjacent under the current assignment. This basically rules out columns 2 and 6, but we continue with the voting process. If we select all entries to be , then we pick up two adjacencies in (,), five in (,) and one in (,) for a total of eight. The (,) and (,) adjacencies are preferred and, although combinations 1 and 3 also have eight reinforcements, combination 7 is selected.
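The "buzzer" check described above can be sketched as a simple loop test. This sketch is illustrative only, with hypothetical state names; it detects an optional-entry combination under which unstable states chase each other forever instead of settling on a stable state.

```python
# Sketch: a "buzzer" exists if, within one input column, following the
# next-state entries from an unstable state revisits a state without
# ever reaching a stable one (a state that maps to itself).

def buzzes(column, start):
    """column: dict mapping each state to its next state for one input."""
    seen, s = set(), start
    while column[s] != s:          # stable when a state maps to itself
        if s in seen:
            return True            # revisited an unstable state: oscillation
        seen.add(s)
        s = column[s]
    return False

# Hypothetical columns: in `bad`, p and q send the circuit to each other.
bad = {"p": "q", "q": "p", "r": "r"}
good = {"p": "q", "q": "r", "r": "r"}
print(buzzes(bad, "p"), buzzes(good, "p"))   # prints True False
```

Combinations Nos. 4 and 5 in Figure 9.6 are ruled out precisely because they create such a cycle in the column for input state 00.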


Note also that this provides an 8-cell group (with the don't cares), which is also highly desirable. It should be mentioned that the multiple transitions created by patching will slow down a circuit's operation, and if speed is important, that can be an overriding factor.

Combinations 3 4 5

Reinforcement in Existing Adjacencies


(,) (,) (,) (,) 4 0 3 0 3 1 3 1 1 1 5 0 1 3 3 1 0 3 4 0 0 5 2 1

Totals: 7 8 7 8 7 8

Figure 9.6. All Possible Combinations of Assignments for Input State 00, Showing Reinforcements to Existing Adjacencies

There is one final check to be made. Since the column for input state 00 was not included in the Rule 0 analysis, we must make certain that the combination of assignments as selected does not have any races. This means that Rule 0 should now be applied to that column to see if any races might possibly exist. In this case, no further modifications are required. The final resulting transition tables are shown in Figure 9.7.

Transition Table Next State Current Input state (x1,x2) Out State 00 01 10 11 z 0 0 1 Transition Table Next State Current State Input state (x1,x2) Out y1,y2 00 01 10 11 z 0 00 10 00 10 00 0 10 10 00 10 11 1 11 10 11 10 11 01

Figure 9.7. Transition Table with State Assignments, Example No. 2


9.4. Concluding Remarks

State assignment is a messy process, and there is no algorithm, other than an exhaustive search, that will guarantee a best selection. The methods shown here will give good results and allow the designer to have a better understanding of the design, the options available, and the effect of the selection between options.


For fundamental mode operation, there are two concerns: races and economics. Races are not permitted, and the process must ensure that only one memory element will change under any allowable change in the input state. Once the required adjacencies have been met, the rules of Humphry may be applied to obtain a reasonable circuit from the standpoint of cost. For pulse mode operation, the rules of Humphry are used to obtain a circuit with reasonable cost.

The selection of states on the Karnaugh map is not always as easy and straightforward as in the simple examples presented here. Unless we have a computer program to aid in the clustering process, success is a function of our ability to perceive the positions of states that tend to maximize the desired adjacencies. For fundamental mode, observe the cycles or loops in the state graph and the states which are common to the loops. Visualize the loops lying in the Karnaugh map, with the common states lying in the intersection of the loops.


Problem 9.1. Develop state assignments for the following fundamental mode transition tables.

Current State a b

State Input state(x1,x2) 00 01 10 11 a b b b a a a. a b

Output State Input state(x1,x2) 00 01 10 11


0 1 0 0 0 0 1 0

Current State a b

State Input state(x1,x2) 00 01 10 11 a b a a a b b. b b

Output State Input state(x1,x2) 00 01 10 11


0 1 0 1 0 1 0 1

Current State a b c d

State Input state(x1,x2) 00 01 10 11 a a d d a,b -,b c,c,d c b c b c. a a d d

Output State Input state(x1,x2) 00 01 10 11


0 ,0 0 ,0 0, 0 0 1 0 ,0

0 0

0 -

0 0


Problem 9.2. Develop state assignments for the following fundamental mode transition tables.

State Input state(x1,x2) 00 01 10 11 a b a b a a c c b b d d a. c d c d Output State Input state(x1,x2) 00 01 10 11
0 1 0 1 1 1

Current State a b c d . Current State a b c d

State Input state(x1,x2) 00 01 10 11 a b b a a c a a c c c b. State Input state(x1,x2) 00 01 10 11 a b a a a c c a c c c d c. b b d d d c d

Output State Input state(x1,x2) 00 01 10 11


0 1 0 1 0 0 1 1 0

1 -

1 0

Current State a b c d

Output State Input state(x1,x2) 00 01 10 11


0 1 0 1 1 0 1 1 1


Problem 9.3. Develop state assignments for the following pulse mode transition tables.
0
a b c d a c a c

1
b d b d a.

0
0 0 1 0

1
0 1 0 0 a b c d

01 a c d a

10 b b b b b.

01 0 0 0 0

10 0 0 0 1

0
a b c d e b b d c e c.

1
c c c a b

n 0 1 0 1 1 a b c d e f g h i j k l m

0 a a
b c d e f g h i j k l

1 n b 0000 c 0001
d e f g h i 0010 0011 0100 0101 0110 0111

j 1000 k 1001 l 1010 m 1011 m 1100 d.


Problem 9.4. Develop state assignments for the following fundamental mode transition table.

Transition Table Next State Output State Input State Input State Current State 00 01 10 11 00 01 10 11 a b a a 0 0 0 0 a b b b c 0 0 0 b c c - d f - 1 d e d d d 1 0 1 1 e e a c 1 1 - 1 e f f 0 0 0 1 f g f g g f g a 0 1 0 -


Problem 9.5. Develop state assignments for the following fundamental mode transition tables.

Transition Table Next State Current State a b c d Input State (x1,x2) 00 a a a d 01 a b a b 10 b b c b a Transition Table Next State Current State a b c d Input State (x1,x2) 00 a a a a 01 a b c c 10 b b d d b. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a d a d 01 b b d 10 a c c c c. 11 c b c b 00 0 0 0 0 Output State Input State (x1,x2) 01 0 0 1 10 0 0 0 11 1 1 1 11 d c c d 00 0 0 Output State Input State (x1,x2) 01 0 0 1 1 10 0 1 1 1 11 0 0 1 11 x c c d 00 0 0 1 Output State Input State (x1,x2) 01 0 1 0 1 10 1 0 1 11 0 1


Problem 9.6. Develop state assignments for the following fundamental mode transition tables.

Transition Table Next State Current State a b c d Input State (x1,x2) 00 a d a d 01 a,c b,d a,c b,d 10 b b c c a. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a d a d 01 b b b 10 a c c c b. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a a d d 01 b b b 10 c c c c c. 11 b b 00 0 0 1 1 Output State Input State (x1,x2) 01 0 0 10 1 1 11 0 11 d b d d 00 0 0 0 Output State Input State (x1,x2) 01 0 0 1 0 10 0 1 1 1 11 0 0 1 11 a d a d 00 0 0 0 Output State Input State (x1,x2) 01 0 0,0,0 10 0 0 1 1 11 0 0 0


Problem 9.7. Develop state assignments for the following fundamental mode transition tables.

Transition Table Next State Current State a b c d Input State (x1,x2) 00 a c c c 01 b b b d 10 a c c c a. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a a c a 01 b b d d 10 c b c c b. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a d a d 01 b b d d 10 a c c a c. 11 c b c b 00 0 0 0 0 Output State Input State (x1,x2) 01 0 0 1 1 10 0 1 1 0 11 1 1 1 11 a a a d 00 0 0 1 Output State Input State (x1,x2) 01 0 0 1 1 10 0 1 0 0 11 0 0 0 0 11 b b d d 00 0 1 1 Output State Input State (x1,x2) 01 0 0 0 10 0 1 1 11 1 1 1


Problem 9.8. Given the following transition table, develop state assignments:

Transition Table Next State Current State a b c d Input State (x1,x2) 00 a b a c 01 b c d a 10 b c d c a. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a a a a 01 a b c c 10 b b d d b. Transition Table Next State Current State a b c d Input State (x1,x2) 00 a d a d 01 b b d d 10 a c c a c. 11 c b c b 00 0 0 0 0 Output State Input State (x1,x2) 01 0 0 1 10 0 0 0 11 1 1 1 11 d c c d 00 0 0 Output State Input State (x1,x2) 01 0 0 1 1 10 0 1 1 1 11 0 0 1 11 c a c b 00 0 Output State Input State (x1,x2) 01 0 1 1 10 0 1 0 11 0 0 1


Problem 9.9. Given the following transition table, develop state assignments: a. Using the voting method. b. Using multiple state assignments to achieve optimal speed.

Transition Table Current State a b c d Next State Input State (x1,x2) a c c a b b d d c b c b a d a d 0 0 0 0 Output State Input State (x1,x2) 0 0 1 0 0 0 0 0 0 1


Chapter 10: Sequential Circuit Design

10. Sequential Circuit Design


Once the state assignments have been made, the circuit design can be completed in a very straightforward manner. Three examples will be presented in this chapter:

1. Fundamental mode
2. Pulse mode with levels input and one synchronizing pulse
3. Multi-pulse circuits

Although all three circuits will be designed with very similar techniques, there are special characteristics for each type of design that need to be emphasized. Before continuing, it is necessary to discuss the memory elements that are available for use.

10.1. Memory Elements

There is a large variety of memory elements available for designing sequential circuits. All of them can be set "on" and "off"; they differ as a function of the mode of operation. For example, in fundamental mode, we do not generally use any element as such, but design the circuit using the delay inherent in the combinational circuits as a virtual memory element. For pulse mode circuits, a variety of elements exist, and their use will depend on the type of pulse mode circuit being designed. Table 10.1 shows several common kinds of memory elements and the type of circuits with which they would most likely be used. Table 10.2 shows the truth tables based on standard operation for several commonly used memory elements. For J-K and D type flip-flops, the pulse input is not shown as a part of the domain. It is understood that the memory element cannot change state unless a pulse occurs. The table shows the next state that will occur if the pulse input goes high with the level inputs as shown. Notice that with D elements, the subsequent setting of the memory element is not a function of its present setting. In general, however, the present setting is a necessary part of the domain.
Table 10.1. Commonly Used Memory Elements

Memory Element     Fundamental Mode   Clocked Circuits   Multi-pulse
Delay Element             x
R-S Flip-Flop             x                  x                x
Delay Flip-Flop                              x                x
J-K Flip-Flop                                x                x
T Flip-Flop                                  x                x

10.2. Fundamental Mode

Fundamental mode circuits are designed around virtual delay elements. These delay elements can be associated with the delay through combinational circuits or with the action of relays. Circuits designed with combinational circuits are as fast as those made with any given technology. They are also sensitive to relatively short delays in signals: the circuit must be designed for hazard-free operation, and the transition table must be examined for sensitivities referred to as essential hazards. The delay element is a concept that applies to any device that has a delay from its input to its output. The delay element has the property that the signal at the input of the device will be present at the output of the device a short time later. This applies to almost any device, including a piece of wire. It may appear to be a trivial property, but in a feedback situation, it becomes a memory element. This property will be more easily observed when analyzing the results of the design with the first example. All combinational

circuits will be considered to consist of a delay-free combinational circuit with a delay element at the output. Circuits designed around relays must view the relay as a delay element.

Table 10.2. Truth Tables For Common Memory Elements

R-S Flip-Flop               J-K Flip-Flop
Qn  R  S  Qn+1              Qn  J  K  Qn+1
0   0  0   0                0   0  0   0
0   0  1   1                0   0  1   0
0   1  0   0                0   1  0   1
0   1  1   -                0   1  1   1
1   0  0   1                1   0  0   1
1   0  1   1                1   0  1   0
1   1  0   0                1   1  0   1
1   1  1   -                1   1  1   0

Delay                       T Flip-Flop
Qn  D  Qn+1                 Qn  T  Qn+1
0   0   0                   0   0   0
0   1   1                   0   1   1
1   0   0                   1   0   1
1   1   1                   1   1   0

(A dash marks the disallowed R-S input combination, R = S = 1.)
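The standard behavior summarized in these truth tables can also be written as next-state functions. The sketch below is mine (function names and the 0/1 encoding are assumptions, not from the text), with None marking the disallowed R-S input pair.

```python
# Next-state functions matching the truth tables of Table 10.2.
# Q = 1 means the memory element is "on"; None marks the R-S flip-flop's
# disallowed input pair (R and S both high).

def rs_next(q, r, s):
    if r and s:
        return None      # disallowed for the R-S flip-flop
    return 1 if s else 0 if r else q

def jk_next(q, j, k):
    if j and k:
        return 1 - q     # both inputs high: toggle
    return 1 if j else 0 if k else q

def d_next(q, d):
    return d             # next state does not depend on the present setting

def t_next(q, t):
    return 1 - q if t else q
```

Note that d_next ignores q, matching the remark above that the D element's subsequent setting is not a function of its present setting.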

Fundamental mode circuits can be designed around the asynchronous operations available with most commercially available flip-flops. For example, most flip-flops are designed with R (reset) and S (set) terminals that are used to establish desired internal states on power-up or other reset types of situations. A high value on the S terminal will cause the memory element to be set "on" and a high value on the R terminal will cause it to be set "off." This action gives us what is termed (generically) an R-S flip-flop. The R-S flip-flop is considered to have the constraint that the two terminals may never be high at the same time. Actually, most R-S flip-flops once built will take on a particular state when both the R and S terminals are high, and a design process could use this property. However, it would be nonstandard, and for use in fundamental mode circuits, additional constraints would have to exist at the input terminals of the flip-flop to ensure that the design would never call for both inputs to change at the same time. Therefore, the standard design procedures must restrict the combinational circuits feeding the set and reset terminals in such a way that the set and reset inputs will never be high at the same time. The R-S flip-flop does have the advantage that its action is solid if fed from a switch with bouncing contacts. Multiple pulses on the set input can only result in the flip-flop being set on and multiple pulses on the reset input can only result in the flip-flop being set off. Some of the other flip-flops can also be used in fundamental mode. For example, some D flip-flops may be used in an asynchronous fashion with level-type inputs on the D and C inputs. The truth and excitation tables for this device are shown in Table 10.3. The options in columns D and C of the excitation table must go together; i.e., D = 0, C = +; or D


= +, C = 0, etc. This provides some options that are best analyzed after the Karnaugh maps have been constructed.

Table 10.3. D Flip-Flop With Level Inputs

Truth Table             Excitation Table
Qn  D  C  Qn+1          Qn  Qn+1    D      C
0   0  0   0            0    0    (0,+)  (+,0)
0   0  1   0            0    1      1      1
0   1  0   0            1    0      0      1
0   1  1   1            1    1    (1,+)  (+,0)
1   0  0   1
1   0  1   0
1   1  0   1
1   1  1   1

Figure 10.1 shows the block diagram concepts and how the memory elements will be used in sequential circuit design.

[Block diagrams: a combinational circuit (CC), with level inputs X and internal state levels Y fed back from the memory elements yi, driving (top) a delay element and (bottom) an R-S flip-flop with outputs Q and Q′. CC: Combinational Circuit; X: Inputs (Levels); Y: Internal States (Levels).]

Figure 10.1. Fundamental Mode Memory Elements

10.2.1. Pulse Mode with Levels Input and One Synchronizing Pulse

Circuits that are designed around a clock, with pulses intended to guarantee that all circuits have quiesced before further action is determined, will be designed around J-K or D flip-flops.

D flip-flops have been called Delay flip-flops and Data flip-flops. However, for design purposes, they respond as the delay element in a fundamental mode circuit. It is guaranteed that when the next pulse occurs, the value at the output will be the value at the input at the last pulse.


[Block diagrams: combinational circuits (CC), with level inputs X and internal states Y, driving clocked D and J-K flip-flops; the clock pulse is applied to each C input.]

Figure 10.2. Pulse Mode Memory Elements with Level Inputs

J-K flip-flops respond similarly to R-S flip-flops, except that the J and K inputs take level-type signals, and their state determines the next state of the flip-flop when the clock pulse occurs. The J input operates much like the set input: if it is high with the K input low when the clock pulse occurs, the memory element will be set "on." If the K input is high with the J input low when the clock pulse occurs, the memory element will be set "off." Unlike the R-S flip-flop, both inputs may be high at pulse time. If this is so, then when the clock pulse occurs, the memory element will change its state (called toggling). That is, if it was "on" when the pulse occurred, it will be turned "off"; if it was "off" when the pulse occurred, it will be turned "on." If both levels are low at pulse time, the memory element does not change.

10.2.2. Multiple Pulse Circuits

Multipulse circuits can be designed around many kinds of flip-flops. If D and J-K flip-flops are used, then combinational circuits will be required in the clock input. The Trigger or Toggle flip-flop, called a T flip-flop, has the property that a pulse on the T input will cause the memory element to toggle. Note that a J-K flip-flop, with its J and K inputs tied high, allows the clock input to be treated as the T input of a T flip-flop. The R-S flip-flop may be used in all the above situations. However, with pulse mode operation, the pulses must be brought into the combinational networks (normally cnf design with the pulse brought into the output and gate).


[Block diagrams: combinational circuits (CC), with level inputs X, internal states Y, and pulse inputs P, driving D and J-K flip-flops; separate combinational circuits gate the pulses onto the C inputs.]

X: Level Inputs; Y: Internal State Inputs. (Left) the D flip-flop; (right) the J-K flip-flop.

Figure 10.3. D and J-K Flip-Flops With Multiple Pulse Inputs

The T flip-flop can be used in any pulse mode situation. Since T flip-flops almost always have R and S terminals available as well, they are sometimes designed as R-S-T flip-flops. When used in this way, only one input may be high at a time.

10.2.3. Memory Element Design Characteristics

There are many methods for setting up the combinational circuits that drive the flip-flops. Some of these methods are designed around the full table of combinations for the flip-flop, which considers the output of the flip-flop as one of the inputs. These tables are required for analyzing the final designs, and the more popular types of flip-flops were shown in Table 10.2. The method described here considers the flip-flop as a black box that must be excited in a particular way in order to achieve the desired state. That is, given the present state of the flip-flop, what must be done to get the flip-flop into the state desired? The conditions required for setting each principal kind of flip-flop are shown in Table 10.4. On the left, Qn represents the present state of the memory element and Qn+1 represents the next state desired. With pulse mode, we may think of the subscript n as representing the nth clock pulse and the subscript n+1 as indicating the time one pulse later. However, with fundamental mode, the subscripts must represent the present state and the next state respectively. Q = 0 represents the memory element as "off" and Q = 1 represents the memory element as "on."



Table 10.4. Device Excitation Table

                 D¹      R-S²      J-K       T      R-S-T³
Qn   Qn+1   |    D      R   S     J   K      T     R   S   T
0     0     |    0      +   0     0   +      0     +   0   0
0     1     |    1      0   1     1   +      1     0   1   1
1     0     |    0      1   0     +   1      1     1   0   1
1     1     |    1      0   +     +   0      0     0   +   0

1. The D element is used for fundamental mode and the D flip-flop is used for pulse mode.
2. The R and S inputs may not both be high at the same time.
3. Only one input may be high at a time.

10.2.4. The R-S Flip-Flop

Consider the R-S flip-flop. If Qn = 0 and we desire Qn+1 = 0, what action is required? The S input must not go high, since that would cause Qn+1 to be 1. Therefore, S must be 0. If the R input remains at 0, then with both inputs low, there will be no change in the state of the flip-flop, and so Qn+1 will be 0. However, if R were to go high, the result would be the same, since the resetting action always results in Qn+1 = 0. Therefore, we don't care whether R is high or low. It is important from the standpoint of simplicity and economy that don't cares be maintained as long as possible in the design process. If Qn is 0 and we desire Qn+1 to be 1, the set input must be set to 1. Because of the constraint that the R and S inputs cannot be high at the same time, the R input must be low. If Qn is high and we desire Qn+1 to be low, we must set the R input high. Again, because of the R-S constraint that both inputs cannot be high at the same time, the S input must be low. If Qn is high and we desire that it remain high, we must not have the R input go high (R = 0), but we may have the S input either go high or remain low (S = +).

10.2.5. J-K Flip-Flops

The J-K flip-flops are very much like the R-S flip-flops, with the J input acting as a set input and the K input acting as a reset input. However, the fact that both inputs may be high, with a toggling of the memory element state, results in more don't cares in the design table. First, consider Qn = 0 with Qn+1 = 0 desired. Since we do not have the constraint that J and K cannot both be high, we must consider all possibilities. If both inputs are low, the flip-flop will remain in its present state. If the K input alone is high, the flip-flop will be reset. This also results in the desired effect. If the J input is high, then if the K input is low, the flip-flop will be set high (not what we want); or if the K input is high also, the flip-flop will toggle, producing Qn+1 = 1 (also not what we want). Thus, the J input must remain low. This results in the same entries as for the R-S flip-flop (K = +, J = 0). If Qn = 0 and we desire Qn+1 = 1, we must turn on the flip-flop. This means the J input must be high. If the J input is high and the K input is low, the flip-flop will be set on. If both inputs are high, the flip-flop toggles, which also results in the flip-flop being turned on. Therefore, we have a don't care for the K input and J = 1. In a similar fashion, if Qn = 1 and we desire Qn+1 = 0, then we may either reset the flip-flop or toggle it. This results in K = 1, J = +.

Finally, if Qn = 1 and we desire Qn+1 = 1, we find K must remain low, and J may be high or low (K = 0, J = +).

10.2.6. D Elements and D Flip-Flops

D elements and D flip-flops are particularly simple: the input at time n must be the state desired at time n + 1.

10.2.7. T Flip-Flops

T flip-flops will always change their state when a pulse occurs and retain that state until the next pulse.

10.2.8. General Comments on Flip-Flops

When working with pulsed flip-flops in the design stage, it is always assumed that the pulse will appear and be gone before the flip-flop can change states. This means that the present state of the flip-flop stays constant during the decision interval even though it may be going to the other state (and will be there) before the next pulse comes along. In order to ensure that this occurs, the designers of flip-flops add internal states to the flip-flop so that the output is guaranteed to remain the same as long as the clock pulse is on. When the clock pulse returns to 0, the output of the flip-flop takes on the desired value. These flip-flops are called master-slave flip-flops, but when they are used in pulse circuits, they have the same excitation tables as described above. They have the advantage that a broad clock pulse can be used without fear of mis-operation. The use of such flip-flops in fundamental mode circuits requires caution and careful analysis of the excitation table. Another type of flip-flop that protects against wide clock pulses is the edge-triggered flip-flop. In this flip-flop, the leading edge of the clock pulse initiates the action. The clock pulse actually disables the inputs from having any further effect, so that any changes on the inputs after the leading edge has passed will have no effect until the clock pulse goes away (and the next leading edge comes along). This results in an action similar to the master-slave action, except that the flip-flop goes to state Qn+1 right away and we do not have to wait for the trailing edge of the clock pulse. When operating strictly in pulse mode, the action of all flip-flops, whether master-slave or edge-triggered, is the same. In mixed circuits, the actual action needs to be studied. While such circuits are not considered for general design, they are used in specialized types of circuits, like wave-shaping in multi-clock systems.

10.2.9. Design Procedures

The design procedure presented here becomes almost intuitive once the problem is properly organized. It consists principally of establishing a Table of Combinations over the total state domain (organized to relate very closely to the transition table) and then establishing the input conditions necessary to obtain the desired next state. Once this is done, the combinational circuits may be designed directly from the Table of Combinations. The procedure is now presented through several examples. The organization of the truth table is important, as it reduces the effort involved while maintaining a simple and clear relationship to the transition table. The recommended organization for this table is shown in Figure 10.4. With regard to the switching function domain, the internal state is first, on the left, followed by the input state of level-type inputs and then the input state of pulse-type inputs (if multi-pulse design is being used). The domain is followed by the functions that drive the memory elements, the output, and finally the internal state variables again, but for time n+1.
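The excitation entries just derived (collected in Table 10.4 above) can be tabulated and sanity-checked mechanically. This sketch uses function names of my own choosing and verifies that, for the J-K case, every resolution of the don't cares produces the desired next state.

```python
# Device excitation entries from Table 10.4; '+' is a don't care.

def d_excite(qn, qn1):
    return str(qn1)                       # D must equal the desired next state

def rs_excite(qn, qn1):                   # (R, S); never both high
    return {(0, 0): ('+', '0'), (0, 1): ('0', '1'),
            (1, 0): ('1', '0'), (1, 1): ('0', '+')}[(qn, qn1)]

def jk_excite(qn, qn1):                   # (J, K); toggling adds don't cares
    return {(0, 0): ('0', '+'), (0, 1): ('1', '+'),
            (1, 0): ('+', '1'), (1, 1): ('+', '0')}[(qn, qn1)]

def t_excite(qn, qn1):
    return '1' if qn != qn1 else '0'

# Sanity check: any resolution of the J-K don't cares yields Qn+1.
for qn in (0, 1):
    for qn1 in (0, 1):
        j, k = jk_excite(qn, qn1)
        for jv in ((0, 1) if j == '+' else (int(j),)):
            for kv in ((0, 1) if k == '+' else (int(k),)):
                nxt = (1 - qn) if (jv and kv) else (1 if jv else (0 if kv else qn))
                assert nxt == qn1
```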


All variables from the left up through the output will contain the values for the variables at time tn, while the next state columns contain the desired values of the internal state at time tn+1. The domain entries are entered in binary order. This results in the truth table being grouped by internal states at time tn and with the input states in binary order within each state. The entries of each row of the transition table are entered successively in the column for its respective internal state group.
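The binary ordering of the domain described above can be generated mechanically; a small illustrative sketch (the variable counts here are hypothetical, not tied to any particular example in the text):

```python
# Generating the design-table domain in the recommended order: internal
# state variables first, then level inputs, counting in binary so that
# rows group by internal state.
from itertools import product

n_state, n_level = 2, 1                  # e.g. y1, y2 and one level input x
rows = list(product((0, 1), repeat=n_state + n_level))

# Rows appear grouped by (y1, y2), with x cycling fastest within a group.
assert rows[:3] == [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
assert len(rows) == 8
```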
[Layout sketch: the tn domain (internal states at tn, then level inputs at tn, then pulse inputs at tn), followed by the functions (flip-flop excitation, each terminal, and the output), followed by the next internal state at tn+1.]

Figure 10.4. Recommended Layout, Design Truth Tables

The design begins with filling in the domain entries, the desired output, and the corresponding next state (as assigned) from the state-assigned transition table. The only remaining columns to be filled are the input variable functions for the memory elements. These are filled by noting, for each flip-flop (for each internal state), its condition at time tn and its desired condition at time tn+1. The entries are made in the control variable columns for that flip-flop to ensure the appropriate action. These entries are exactly those in the Device Excitation Table (Table 10.4). When all the columns are filled, the only task remaining is the realization of the functions representing the input variables for the memory elements. Since all the functions have the same domain, the tagged Quine-McCluskey method can be utilized to obtain the minimum cost realization of the design. The Boolean equations for the input variables are called the excitation equations.

10.2.10. Fundamental Mode

Consider the design of the sequential circuit for sample Problem No. 1 in Chapter 8. Assigning one state to y = 0 and the other to y = 1 provides the transition table in Figure 10.5 (from Figure 8.3). Since the circuit is fundamental mode, the D element will be used. The Design Truth Table is constructed in Figure 10.6. The domain of the switching functions will be the total state (at time tn), which is placed at the left in the table. The internal state is listed first, followed by the input states. The input function (to be designed) for the D terminal is next, followed by the output function (z). Both of these functions are based on the domain (at time tn). The last column in the table is the desired internal state at time tn+1, given the domain element at time tn.


Transition Table

Current   Next State             Output State
State     Input State (x1,x2)    Input State (x1,x2)
          00  01  10  11         00  01  10  11
  0        0   0   1   0          0   0   0   0
  1        0   1   1   1          0   1   0   1

Figure 10.5. State Assigned Transition Table From Figure 8.3

All of the columns except the D column are filled in directly from the transition table in Figure 10.5. The excitation for the flip-flop is determined for each row by observing the state of the flip-flop at tn (yn) and its desired value at time tn+1 (yn+1) and entering the excitation which will cause the desired action. This can be determined from the Device Excitation Table (Table 10.4). For D elements, it is particularly simple, since the excitation must be the same as the desired next state: column D is identical to column yn+1.

      tn               tn+1
 yn  x1  x2  |  D   z  |  yn+1
  0   0   0  |  0   0  |   0
  0   0   1  |  0   0  |   0
  0   1   0  |  1   0  |   1
  0   1   1  |  0   0  |   0
  1   0   0  |  0   0  |   0
  1   0   1  |  1   1  |   1
  1   1   0  |  1   0  |   1
  1   1   1  |  1   1  |   1

Figure 10.6. Design Truth Table, Fundamental Mode

The (hazard-free) design, using the tagged Quine-McCluskey method, is carried out in Figure 10.7, showing the excitation equations and the resulting circuit.
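The fundamental mode behavior can be simulated by iterating the feedback loop until the D input equals the present state. A sketch (helper names are mine), using the minterm lists D = Σ(2,5,6,7) and z = Σ(5,7) from the hazard-free design that follows:

```python
# Iterating the fundamental mode feedback loop: the D element feeds
# y(n+1) = D(y, x1, x2) back until the internal state is stable.

D_MINTERMS = {2, 5, 6, 7}
Z_MINTERMS = {5, 7}

def settle(y, x1, x2):
    while True:
        m = 4 * y + 2 * x1 + x2             # minterm number of the total state
        d = int(m in D_MINTERMS)
        if d == y:                           # delay element input equals its
            return y, int(m in Z_MINTERMS)   # output: the circuit is stable
        y = d

# x1 = 1 with x2 = 0 is the only input that moves the circuit to y = 1:
assert settle(0, 1, 0) == (1, 0)
# once y = 1, the output z follows x2:
assert settle(1, 0, 1) == (1, 1)
assert settle(1, 1, 0) == (1, 0)
# x1 = x2 = 0 returns the circuit to y = 0:
assert settle(1, 0, 0) == (0, 0)
```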


fi(y, x1, x2) ==> D = Σ(2,5,6,7), z = Σ(5,7)

Tagged Quine-McCluskey reduction:
  2 D-  ✓      (2,6)  D -   A
  5 Dz  ✓      (5,7)  D z   B
  6 D-  ✓      (6,7)  D -   C
  7 Dz  ✓

A = x1x2′ covers minterms (2,6) of D; B = yx2 covers (5,7) of D and z; C = yx1 covers (6,7) of D. All three prime implicants are retained for the hazard-free cover, giving the excitation equations

  D = x1x2′ + yx2 + yx1,    z = yx2

[Circuit diagram: and gates forming x1x2′, yx2, and yx1 are or-ed to produce D, whose output feeds back as y; z is taken from the yx2 gate.]

Figure 10.7. Hazard-Free Design, Fundamental Mode

The circuit must be analyzed. This procedure means independently developing the table of combinations for columns D and z from the circuit and then using the entries in the flip-flop analysis table (Table 10.2) to establish the next state. The transition table can then be constructed to compare with the transition table with state assignments. The circuit should be checked against the reduced state graph. Note that if both x1 and x2 are low, all and gates are effectively disabled and y will be 0 (the y = 0 state). If x2 goes high, the gates are held off by x1 and y. With x2 high, if x1 goes high, the gates are still held off by y and x2′. The only way the circuit can change to the y = 1 state is for x2 to be low and for x1 to be high. Once y is high, the two lower and gates are enabled by y, and the output will follow x2. If x1 goes high with x2 high, neither y nor z will change; similarly if x1 then goes low. If x2 goes low with x1 high, y will stay at 1 (though the output will go to 0). With x1 high,


the output will continue to follow x2. If x1 goes low when x2 is low, all gates are disabled and y goes to 0. The circuit performs in accordance with the reduced state graph. Essential hazards (yet to be discussed) cannot occur in two-state transition tables; therefore the design is complete.

Problem 10.1. Using state assignment a = 0 and b = 1, design circuits for the transition tables in Figures 9.8a and 9.8b using D elements.

10.2.11. Pulse Mode with Levels Input and One Synchronizing Pulse

For this problem, we use sample Problem No. 2 from Chapter 7. State assignments have been made: a = 00, b = 01, c = 11, and d = 10, yielding the transition table in Figure 10.8 (from Figure 7.16).

Transition Table

Current     Next State, Input (x)    Output, Input (x)
State         x = 0     x = 1          x = 0   x = 1
(a) 00          01        10             0       0
(b) 01          01        11             0       0
(c) 11          00        10             0       1
(d) 10          00        10             0       0

Figure 10.8. State Assigned Transition Table From Figure 7.16

Two J-K flip-flops are used, giving the Design Truth Table shown in Figure 10.9. Since the circuit has only one (synchronizing) pulse, the pulse is not included in the switching domain. The switching domain is entered with internal states, followed by the level input. The flip-flop input functions are next (two sets of J-K input functions), followed by the output function, and finally the next internal state. Again, all columns except the J and K columns are filled directly from the transition table.
           tn                                  tn+1
      y1  y2  x  |  J1  K1  |  J2  K2  |  z  |  y1  y2
(a)    0   0  0  |  0   +   |  1   +   |  0  |  0   1   (b)
(a)    0   0  1  |  1   +   |  0   +   |  0  |  1   0   (d)
(b)    0   1  0  |  0   +   |  +   0   |  0  |  0   1   (b)
(b)    0   1  1  |  1   +   |  +   0   |  0  |  1   1   (c)
(d)    1   0  0  |  +   1   |  0   +   |  0  |  0   0   (a)
(d)    1   0  1  |  +   0   |  0   +   |  0  |  1   0   (d)
(c)    1   1  0  |  +   1   |  +   1   |  0  |  0   0   (a)
(c)    1   1  1  |  +   0   |  +   1   |  1  |  1   0   (d)

Figure 10.9. Pulse Mode Design Truth Table - One Synchronizing Pulse

The J1 and K1 columns are filled in by observing y1 at tn and tn+1 and entering the values (from the Device Excitation Table, Table 10.4) that will cause the desired action. Similarly, J2 and K2 are filled in by observing y2 at tn and tn+1 and entering the excitation values to create the desired action.
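The completed table can be checked mechanically: for every row, any resolution of the don't cares must drive (y1, y2) to the listed next state. A sketch (row data as read from Figure 10.9; helper names are mine):

```python
# Verifying the design truth table of Figure 10.9 against the standard
# J-K next-state rule, over every resolution of the '+' entries.
from itertools import product

def jk_next(q, j, k):
    return 1 - q if (j and k) else 1 if j else 0 if k else q

#        y1 y2 x   J1   K1   J2   K2   z   next (y1, y2)
ROWS = [(0, 0, 0, '0', '+', '1', '+', 0, (0, 1)),
        (0, 0, 1, '1', '+', '0', '+', 0, (1, 0)),
        (0, 1, 0, '0', '+', '+', '0', 0, (0, 1)),
        (0, 1, 1, '1', '+', '+', '0', 0, (1, 1)),
        (1, 0, 0, '+', '1', '0', '+', 0, (0, 0)),
        (1, 0, 1, '+', '0', '0', '+', 0, (1, 0)),
        (1, 1, 0, '+', '1', '+', '1', 0, (0, 0)),
        (1, 1, 1, '+', '0', '+', '1', 1, (1, 0))]

def resolutions(entry):
    return (0, 1) if entry == '+' else (int(entry),)

for y1, y2, x, j1, k1, j2, k2, z, nxt in ROWS:
    for a, b, c, d in product(resolutions(j1), resolutions(k1),
                              resolutions(j2), resolutions(k2)):
        assert (jk_next(y1, a, b), jk_next(y2, c, d)) == nxt
```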

The Tagged Quine-McCluskey Method may be used to design the five functions required. However, with J-K flip-flops, there are many don't cares and with a reasonably good state assignment, the circuit can frequently be designed to be minimal cost directly from the Karnaugh Maps. Figure 10.10 shows the Karnaugh maps, the excitation equations and the resultant circuit. y1y2 x 00 01 11 10 0 0 0 + + 1 1 1 + + J 1= x y1y2 x 00 01 11 10 0 1 + + 0 1 0 + + 0 - J 2 = y1 x y1y2 x 00 01 11 10 0 + 0 1 + 1 + 0 1 + K 2 = y1 y1y2 x 00 01 11 10 0 + + 1 1 1 + + 0 0 K 1 =x y1y2 x 00 01 11 10 0 0 0 0 0 1 0 0 1 0 z = y1y2x

[Circuit diagram: two clocked J-K flip-flops y1 and y2; J1 = x and K1 = x′ feed the first, J2 = y1′x′ and K2 = y1 feed the second, with the clock pulse on both C inputs.]

Figure 10.10. J-K Circuit Design With Synchronizing Pulse

Again, check the action of the circuit against the flow state graph to ensure that no errors have occurred. Note that the connection on flip-flop y1 is such that y1 will follow x: if x is high, then J1 will be high and y1 will be set on; if x is low, then K1 will be high and y1 will be set off. State a is represented by both flip-flops off. If x is low, x′ is high and, with y1′ high, J2 will be high; so if x is low, y2 will be set on, giving state 01 (b). If x is high, then x′ will be low, which will hold J2 low. With y1 following x, the resultant next state is 10 (d). Consider state b (01). With y1 being low, K2 will be low and y2 cannot be reset or toggled off. y2 will remain set and y1 will follow x. If x is low, the next state will be 01 (the same), and if x is high, the next state will be 11 (state c).


Consider state c (11). With y1 high, K2 will be high, and so y2 will either be reset or toggled off. With y1 following x, if x is low, the next state will be 00 (state a), and if x is high, the next state will be 10 (state d). Consider state d (10). Again, K2 will be high. We now note that with y1 on, y1′ will be low, preventing J2 from being high. Hence, y2 will remain off, and with y1 following x, if x is low, the next state will be 00 (state a); if x is high, the next state will be 10 (state d). Thus, the circuit will perform properly.

Problem 10.2. Design a circuit for the transition table in Figure 10.8 using D flip-flops.

Pulse mode circuits can also be designed around R-S and T flip-flops using the techniques already discussed. The procedure used to design circuits with a single input pulse is identical, except that the design must assure that only pulses appear at the inputs to the flip-flops. (The cnf design obviously has an edge, since the pulse can be entered as an input to the output and gate.) If the input signals contain two or more pulses, the design table must include all of the input pulses in the domain section. However, since two pulses cannot occur at the same time, the design effort can be reduced by designing separate circuits for each input pulse, using the internal and level input states as the domain.

10.2.12. Multi-Pulse Design

J-K flip-flops can be used as T flip-flops by tying both input leads high. However, it seems reasonable that if we were to permit the J and K leads to vary, there might be some reduction in design complexity. Generally, this will be the case, and a method for that type of design is now proposed as a simple extension to the methods already presented. Flip-flops such as the J-K and D flip-flops must have level-type signals on the main input terminals and a pulse-type signal on the C input. Whereas the total state is involved in the functions for the C inputs, the pulse-type signals cannot be a part of the domain for the switching functions to be applied to the level inputs. In the following, the term "levels state" will be used to denote that portion of the domain associated with level-type signals (internal states and level-type inputs). In general, a "levels state" may include several rows of the design table. For this type of design, a C column is added to the design table for each flip-flop for the development of the functions for the C inputs. The basic principle for this design procedure is based on the fact that a pulse is required on the C input only when a flip-flop must change state. This leads to an initial selection of 1 for all entries in the C column of the design table wherever the corresponding flip-flop changes state, and a + entry everywhere else. There exists the option of either allowing the pulse to come through, or of blocking it, permitting the level inputs to be don't cares. For example, a multi-pulse transition table might very well require, for a given internal state, that a flip-flop change its state for one pulse and remain the same for another. The standard design technique that allows all pulses on the C input would require that the level inputs to the flip-flops be different for one pulse than for another. However, since the domain for those circuits is only the "levels states," this is not possible. There must be some blocking of pulses to the C leads. The 1's in the C column cannot be blocked, since the flip-flop must change state and the only way this can occur is to have a pulse on the clock lead. This forces us to override the + entries in the C column with a 0 wherever such a conflict occurs, allowing the entries in the level input columns to be replaced with a +, thereby eliminating the conflict.


After the conflicts have been resolved, if there still remain + entries in the C column, the subsequent design procedure permits the + entries to be either 0's or 1's. There is some trade-off here. If the entries are 0's, then the corresponding entries in the columns for the flip-flop level inputs become don't cares, resulting in possible simplification of the design for those combinational circuits. A good state assignment will result in those circuits being quite simple anyway and, coupled with the fact that the C input circuits are generally the most complicated, the + entries in the C columns should be used to minimize the combinational circuits for the C inputs. The following procedure gives minimization priority to the combinational circuits for the C inputs. The designer can modify the process if it appears that the combinational circuits for the level inputs are becoming unnecessarily complicated.

1. Develop the design table as though all clock pulses were input to the C input. (This is the standard design procedure presented in the previous section.)
2. Fill in the columns for the clock pulses, placing a 1 wherever a change in the state of the respective flip-flop is required and a + otherwise.
3. For each "levels state," if both 1's and 0's appear in the design for a level input column, then
   a. Change the + entries in the corresponding C column to 0 for that levels state.
   b. Change the corresponding entries in the level input columns to +.
4. Design the C input circuit for minimal cost, utilizing any remaining don't cares.
5. Wherever the resulting design for the C input circuit has a 0, place a + in the corresponding location of the associated level input columns.
6. Complete the design.

The transition table in Figure 10.11 is for a circuit (with two input pulses p1 and p2) that continually monitors for the sequence in time: p1, p2, p1, p2.
                 Transition Table
Current      Next State            Output State
State        Input (p1,p2)         Input (p1,p2)
             01       10           01      10
(a) 00       b(01)    a(00)        0       0
(b) 01       d(11)    c(10)        0       0
(c) 10       b(01)    a(00)        1       0
(d) 11       d(11)    a(00)        0       0

Figure 10.11. Multi-pulse Transition Table

The Design Truth Table with the initial entries is shown in Figure 10.12. After the data are transferred from the transition table, the column C1 is filled in with 1's wherever y1 at time tn differs from y1 at time tn+1. Similarly, column C2 is filled in with 1's wherever y2 at time tn differs from y2 at time tn+1. The J and K columns are filled in the usual way, using the desired values from the Device Excitation Table for the state at tn going to the state at tn+1.
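Filling the J, K, and C columns for one flip-flop is mechanical, given its present and next state. A minimal sketch in modern notation (Python, not part of the original text), using the standard J-K excitation table with '+' marking a don't care:

```python
# Standard J-K excitation table: (y at tn, y at tn+1) -> (J, K); '+' = don't care.
JK_EXCITE = {('0', '0'): ('0', '+'),
             ('0', '1'): ('1', '+'),
             ('1', '0'): ('+', '1'),
             ('1', '1'): ('+', '0')}

def excite(y_now, y_next):
    """Return the J, K, and initial C column entries for one flip-flop."""
    j, k = JK_EXCITE[(y_now, y_next)]
    c = '1' if y_now != y_next else '+'   # a pulse is required only on a change
    return j, k, c
```

Applying this to every row of the table produces the initial entries of Figure 10.12.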

200

Chapter 10: Sequential Circuit Design

              tn                                                       tn+1
     y1  y2  p1  p2     J1  K1  C1    J2  K2  C2    z         y1  y2
(a)   0   0   0   1      0   +   +     1   +   1    0          0   1   (b)
(a)   0   0   1   0      0   +   +     0   +   +    0          0   0   (a)
(b)   0   1   0   1      1   +   1     +   0   +    0          1   1   (d)
(b)   0   1   1   0      1   +   1     +   1   1    0          1   0   (c)
(c)   1   0   0   1      +   1   1     1   +   1    1          0   1   (b)
(c)   1   0   1   0      +   1   1     0   +   +    0          0   0   (a)
(d)   1   1   0   1      +   0   +     +   0   +    0          1   1   (d)
(d)   1   1   1   0      +   1   1     +   1   1    0          0   0   (a)

Figure 10.12. Initial Design Truth Table Including the Clock Pulse Lead
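Step 3 of the procedure — blocking the optional pulses wherever a level input would need different values for different pulses within one levels state — can be sketched as follows (the row dictionaries here are a hypothetical representation, not the book's notation):

```python
def resolve_levels_state(rows, level_inputs=('J', 'K')):
    """rows: one dict per pulse for a single levels state, each holding the C
    entry and the level-input entries ('0', '1', or '+').  If a level input
    needs both 0 and 1 within the levels state, block the optional pulses
    (C '+' becomes '0') and don't-care that input on the blocked rows."""
    for inp in level_inputs:
        specified = {r[inp] for r in rows if r[inp] in '01'}
        if specified == {'0', '1'}:          # conflict between pulses
            for r in rows:
                if r['C'] == '+':            # the 1 entries can never be blocked
                    r['C'], r[inp] = '0', '+'
    return rows
```

For example, the two rows of levels state (a) conflict in the J2 column, so the row whose C2 entry is still optional gets blocked.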

For each "levels state" (in this case, for each internal state), if the values required for J1 differ for pulse input p1 and pulse input p2, the + values in column C1 are set to 0 and the associated 0 values in the J1 column are set to +. The same procedure is used with column K1, and the same process is performed for flip-flop y2. The resulting Design Truth Table is shown in Figure 10.13.

              tn                                                       tn+1
     y1  y2  p1  p2     J1  K1  C1    J2  K2  C2    z         y1  y2
(a)   0   0   0   1      0   +   +     1   +   1    0          0   1   (b)
(a)   0   0   1   0      0   +   +     +   +   0    0          0   0   (a)
(b)   0   1   0   1      1   +   1     +   +   0    0          1   1   (d)
(b)   0   1   1   0      1   +   1     +   1   1    0          1   0   (c)
(c)   1   0   0   1      +   1   1     1   +   1    1          0   1   (b)
(c)   1   0   1   0      +   1   1     +   +   0    0          0   0   (a)
(d)   1   1   0   1      +   +   0     +   +   0    0          1   1   (d)
(d)   1   1   1   0      +   1   1     +   1   1    0          0   0   (a)

Figure 10.13. Final Design Truth Table After Corrections

We now see the reasons behind the development of the Design Truth Table with regard to keeping the internal states first and the level-type inputs next: this keeps the "levels states" grouped for scanning for conflicts when designing multi-pulse type circuits. Also, the simplicity of the design hinges on these internal states not having conflicts; here we see the need for applying Humphrey's Rule 2 when performing state assignments for multi-pulse circuits.

The next step is to design the circuits for the C inputs. For each flip-flop, two circuits will be designed, one for p1 and one for p2. Since p1 and p2 cannot happen at the same time, p1 will always be zero when p2 occurs, and p2 will always be zero when p1 occurs. Hence, p1 is not in the domain of the circuit involving p2, and p2 is not in the domain of the circuit involving p1. We can use a Karnaugh map over the total input state as shown in Figure 10.14, realizing that the case with p1 = p2 = 0 is of no interest and that p1 = p2 = 1 cannot occur. Or, we can use a simple Karnaugh map for p1 that involves only the levels states (in this case y1 and y2, also shown in Figure 10.14).
In either case, the design for the circuits must yield an excitation equation of the form p1·f1(y1,y2) + p2·f2(y1,y2).

201

Chapter 10: Sequential Circuit Design

Having designed the C input circuits, it is now determined that the two don't cares in the C1 column will be 1's. This means that the 0's in the J1 column must be retained. The design of the J and K inputs for both flip-flops can now be completed as shown in Figure 10.15. Note that y2 is operated as a T flip-flop, but that y1 is not. The y1 flip-flop would have been operated as a T flip-flop if the don't cares in the C1 column had been set to 0's, allowing the J1 column to be set to don't cares; this would have yielded the same circuit as the standard design procedure would have yielded for a T flip-flop.
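The observation that y2 acts as a T flip-flop follows directly from the J-K Device Truth Table: with J = K = 1, every admitted clock pulse toggles the state. A quick check (a Python sketch, not from the text):

```python
def jk_next(y, j, k):
    """Next state of a clocked J-K flip-flop: Q+ = J*Q' + K'*Q."""
    return int((j and not y) or (not k and y))

# With J = K = 1 the flip-flop toggles on every clock pulse,
# i.e. it behaves exactly as a T flip-flop.
```

With J = K = 0 the state is simply held, which is why blocking the clock pulse (C = 0) and holding the state are interchangeable design options.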
[Karnaugh maps for the C inputs, drawn both over the total input state (y1 y2 against p1 p2) and over the levels state y1 y2 for p1 and p2 separately, yield:]

    C1 = p1 + p2·(y1·y2)'
    C2 = p1·y2 + p2·y2'

Figure 10.14. Karnaugh Maps for Multi-pulse Design


[Karnaugh maps over y1 y2 for the four level inputs yield:]

    J1 = y2      K1 = 1      J2 = 1      K2 = 1

Figure 10.15. Karnaugh Maps for Design of Level Inputs

[Circuit diagram: J-K flip-flops y1 and y2, with C1 and C2 driven through the pulse-gating circuits just derived, J1 = y2, K1 = J2 = K2 = 1, and the output z formed from the pulse and flip-flop signals.]

Figure 10.16. J-K Flip-Flops With Multi-Pulse Design


Problem 10.3. Design a circuit for the transition table in Figure 10.11 using R-S flip-flops.

Problem 10.4. Design a circuit for the transition table in Figure 10.11 using J-K flip-flops as T flip-flops.

Problem 10.5. Design a circuit for the transition table in Figure 10.11 using D flip-flops with controlled C leads.

10.2.13. Analysis of Sequential Circuits

From time to time, we may be called upon to analyze a circuit that has already been designed. The simplest way to approach this problem is to work backwards, through the Design Truth Table, to the transition table and state graph. The Design Truth Table contains the switching domain and all pertinent functions. The order of table completion is first to fill in the domain, then to fill in all functions by analyzing the combinational circuits. Finally, knowing the functions that control the flip-flops, we can determine, for each total state, what the next internal state will be. (For common flip-flops, we can use the Device Truth Tables in Table 10.2.) Once the states at tn+1 have been determined for all total states, the transition table can be formed, and from that (or directly from the Design Truth Table) the state graph can be drawn.

Problem 10.6. Starting with the circuit, develop the transition table and state graph for the circuit in Figure 10.7.

Problem 10.7. Starting with the circuit, develop the transition table and state graph for the circuit in Figure 10.16.

10.2.14. Essential Hazards in Fundamental Mode

When working with extremely high-speed circuits, a large number of problems are caused by signal delays. One problem of this nature that occurs with fundamental
mode sequential circuits can be observed in the transition table and is called an essential hazard. It occurs when a delay in the signal to part of the circuitry affecting the establishment of a state results in the wrong state being established. If this situation exists, delays must be added to the faster part of the circuit to allow the slower part to catch up with what has happened elsewhere. We will not go into detail regarding the resolution of this problem, since the resolution is related to the technology being used. But it is important to know that the problem exists, and how to find, from the transition table, the situations where it might occur.

Consider the transition table in Figure 10.17, with the system in state a and input state 00. If the input state changes to 01, the circuit is to move to state b. However, if there is a delay in the input signal to the circuitry establishing the state d, that circuitry may see state b established before it recognizes the change in the input state. Under those circumstances, that part of the circuit is in row b, column 00, and creates signals to move to state d. When the input signal does arrive, the system will be stable in state d.

Even though a transition table may have circumstances where a delay could cause a problem, that does not mean that the delay will exist in the circuit developed. The analysis for essential hazards requires scanning the transition table for circumstances that could lead to errors, and then examining the circuit design to see whether it does indeed have delays that could result in reaching the wrong state. A complete examination of the table in Figure 10.17 shows a similar possible problem with state c, under input state 11, going to state b. No other problems of this nature exist, although there is a variety of interesting loops that might occur.
A similar problem exists in the output section, and we should examine the path traversed by the output signal to see whether any sharp pulses might be generated. For example, suppose the system is in state d under input state 11, and the input state changes to 10, moving the system to state c. With a long delay of the type discussed above, the circuits that generate the output could see the system in state c with input state 11. This would create a sharp 1 pulse in the output even though the system successfully moves to state c.

                 Transition Table
Current      Next State              Output State
State        Input State (x1,x2)     Input State (x1,x2)
             00   01   10   11       00   01   10   11
  a           a    b    c    a        0    0    0    0
  b           d    b    c    d        0    0    0    0
  c           a    b    c    c        0    1    0    1
  d           d    d    c    d        1    1    0    0

Figure 10.17. Transition Table for Fundamental Mode
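The hazard just described can be traced mechanically through the table of Figure 10.17. The sketch below (Python, with the next-state section encoded as a dictionary) follows the delayed portion of the circuit, which reacts to the newly established internal state while still seeing the old input:

```python
# Next-state portion of Figure 10.17: (current state, input x1x2) -> next state.
NEXT = {('a','00'): 'a', ('a','01'): 'b', ('a','10'): 'c', ('a','11'): 'a',
        ('b','00'): 'd', ('b','01'): 'b', ('b','10'): 'c', ('b','11'): 'd',
        ('c','00'): 'a', ('c','01'): 'b', ('c','10'): 'c', ('c','11'): 'c',
        ('d','00'): 'd', ('d','01'): 'd', ('d','10'): 'c', ('d','11'): 'd'}

state, old_input, new_input = 'a', '00', '01'
intended = NEXT[(state, new_input)]     # the change 00 -> 01 should give state b
hazard   = NEXT[(intended, old_input)]  # delayed circuitry sees b with input 00
final    = NEXT[(hazard, new_input)]    # once the input arrives, d is stable
```

The intended destination is b, but the delayed circuitry drives the system to d, where it remains stable under the new input.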


Problem 10.8. Examine the transition tables in Figure 9.9 for essential hazards.

10.3. Additional Sequential Circuit Design Problems

Problem 10.9. Design a fundamental mode circuit, using R-S flip-flops, to realize the transition table in Figure 9.8c.


Problem 10.10. Design a fundamental mode circuit using the D element to realize the transition table in Figure 9.9a.

Problem 10.11. Design a circuit using J-K flip-flops to realize the transition table in Figure 9.10a.

Problem 10.12. Design a circuit using D flip-flops to realize the transition table in Figure 9.10c.

Problem 10.13. Use J-K flip-flops as T flip-flops to realize the transition table in Figure 9.10b.

Problem 10.14. Use J-K flip-flops with multi-pulse design to realize the transition table in Figure 9.10b.

Problem 10.15. Use J-K flip-flops as T flip-flops to realize the transition table in Figure 9.10d.

10.4. Concluding Remarks

This text has introduced sequential circuits and their design through the following steps.

I. Development of a state graph from the word problem.
II. Set up the transition table from the state graph.
III. Reduce the transition table to obtain a minimum row transition table.
    A. Find the M-sets.
       1. The Huffman-Mealy Method.
       2. The Compatible Pairs Method.
    B. Find the minimum coverage/closure C-sets.
       1. The coverage table.
       2. The Grasselli-Luccio Method.
IV. Assigning states.
    A. Fundamental Mode.
       1. First priority to race-free design.
       2. Second priority to economical design.
    B. Pulse Mode: priority to economical design.
V. Develop the excitation equations - final design.
VI. Final analysis (including, for fundamental mode, examination for essential hazards).

The problems have been simple and, likewise, the resultant designs. For example, Problem 7.1-2 specified a simple R-S flip-flop. The resultant design would be referred to as a single-rail design, since only the Q output is formed (Q' could be formed through an inverter). Most R-S flip-flops, however, are of the double-rail design, providing both Q and Q' through a circuit that is internally symmetric. The design of these circuits has usually been through heuristic or intuitive approaches. It need not be, but including the symmetry in the primitive state graph or transition table requires a bit of experience, and additional states must occasionally be added to ensure proper operation. Schematics for a few simple double-rail flip-flops have been included in the problems at the end of the chapter.

Finally, there are many components available that can be used to develop sequential circuits with only simple connections, and that serve every bit as well as an original design would. One of the most useful components of this nature is the shift register. If the output of a shift register is fed back to the input, a loop is produced, and every sequential circuit can be
viewed as a group of loops. Its operation can be defined in terms of loop lengths and loop entry. Frequently, the Q of each stage of a shift register is brought out to terminals and is available for use elsewhere in the circuit. If the complemented output (Q') of the last stage of a shift register is tied back to the input, and the register is cleared before the clock pulses are started, the output leads will all carry the same signal (a simple square wave), each stage shifted by one clock pulse. This connection is called a Johnson Counter and will have only one output changing at a time. An n-stage counter of this type will have 2n states (as opposed to 2^n for a binary counter). There is also a wide variety of standard counters available that can be used in several ways. Design around these types of components, and many other interesting devices, is left to future courses.

10.5. Additional Problems for Chapter 10

Problem 10.16. Given the following transition table (with states assigned), design a circuit
a. using the D element concept
b. using an R-S flip-flop

Current State (a) 0 (b) 1 Next State Input state(x1,x2) 00 01 10 11
0 0 1 1 0 0 1 1

Output State Input state(x1,x2) 00 01 10 11


0 0 0 1 0 0 1

Problem 10.17. Design a circuit using the D element concept to realize the following transition table (use the state assignments as shown).

Current State (a) (b) (c) (d) 00 01 11 10

Transition Table Next State Output State Input (x) Input (x) 0 1 0 1 00 01 0 11 01 1 11 10 1 00 10 0


Problem 10.18. Resolve the races in the following transition table, and fill in the output section assuming speed is not important. Then design a circuit to realize the result using the D element concept.

Transition Table Current State a b c d Next State Input State (x1,x2) 00 a a d d 01 b b b 10 c c c c 11 b b 1 00 0 0 1 0 Output State Input State (x1,x2) 01 10 11

Problem 10.19. Resolve any races and design a circuit to realize the following transition table (with states assigned), using the D element concept.

Transition Table Current State (a) 00 (b) 01 (c) 10 (d) 11 Next State Input State (x1,x2) 00 00 10 10 10 01 01 01 11 11 10 10 01 10 01 11 00 00 00 11 00 0 0 1 Output State Input State (x1,x2) 01 0 0 0 0 10 0 0 0 0 11 0 0 0 1

Problem 10.20. Resolve the races in the following transition table, and fill in the output section assuming speed is not important. Then design a circuit to realize the result using the D element concept with the state assignments as shown.

Transition Table Current State (00) a (01) b (11) c (10) d Next State Input State (x1,x2) 00 a a c a 01 b b d d 10 c c c 11 a a a d 1 1 00 0 0 0 0 Output State Input State (x1,x2) 01 10 11 0


Problem 10.21. Design a circuit using the D element concept to realize the following completed transition table.

Transition Table Current State (00) a (01) b (11) c (10) d Next State Input State (x1,x2) 00 a a c c 01 b b d d 10 c c c 11 a b b a 00 0 0 1 1 Output State Input State (x1,x2) 01 0 0 1 1 10 0 0 0 11 0 0 0 -

Problem 10.22. Given the primitive table below, design a circuit that will produce the desired output.

Transition Table Current State (00) a (01) b (11) c (10) d e f Next State Input State (x1,x2) 00 a a a a 01 b b b f f 10 c c c c 11 d e d e d 0 00 0 0 0 1 1 Output State Input State (x1,x2) 01 10 11

Problem 10.23. Fill in the table of combinations for the excitation functions to produce the desired transitions.

Current State: (a) 00, (b) 01, (d) 10, (c) 11

          tn                                     tn+1
y1  y2  x     J1   K1   D2   z          y1  y2   State
0   0   0                                0   1    (b)
0   0   1                                1   0    (d)
0   1   0                                0   0    (a)
0   1   1                                1   1    (c)
1   0   0                                0   1    (b)
1   0   1                                0   0    (a)
1   1   0                                0   0    (a)
1   1   1                                0   1    (b)


Problem 10.24. Design a circuit to realize the following pulse mode transition table (there is a level input and a clock pulse input). Use a J-K flip-flop for y1 and a D flip-flop for y2.

                 Transition Table
Current      Next State       Output State
State        Input (x)        Input (x)
             0       1        0      1
(a) 00       00      01       0      0
(b) 01       00      10       0      0
(c) 10       11      01       1      0
(d) 11       11      00       0      1

Problem 10.25. Design a circuit to realize the following pulse mode transition table (there is a level input and a clock pulse input). Use J-K flip-flops.

                 Transition Table
Current      Next State       Output State
State        Input (x)        Input (x)
             0       1        0      1
(a) 00       b       c        0      0
(b) 01       b       d        0      0
(c) 11       a       d        0      1
(d) 10       a       c        0      0

Problem 10.26. Design a circuit to realize the following pulse mode transition table (there is a level input and a clock pulse input). Use a J-K flip-flop for y1 and D flip-flop for y2.

Current      Next State       Output zn
State        Input (x)
             0       1
(a) 00       00      10          0
(b) 01       01      11          0
(c) 11       00      10          1
(d) 10       01      11          1


Problem 10.27. Design a circuit to realize the following pulse mode transition table with RS flip-flops. (There is a clock pulse.)

                 Transition Table
Current      Next State       Output State z
State        Input (x)        Input (x)
             0       1        0      1
(a) 00       01      10       0      1
(b) 01       00      10       0      1
(c) 10       01      11       1      0
(d) 11       11      00       0      0

Problem 10.28. Design a circuit with J-K flip-flops to realize the following transition table.

Transition Table Current State (00) a (11) b (10) c (01) d Next State Input State (x1,x2) 00 11 00 01 10 01 10 01 00 11 10 01 10 11 00 11 00 11 10 01 00 0 0 1 Output State Input State (x1,x2) 01 0 1 1 10 0 0 1 11 0 0

Problem 10.29. Design a circuit with J-K flip-flops to realize the following transition table.

Transition Table Current State (a) 00 ( b) 01 ( c) 10 ( d) 11 Next State Input State (x1,x2) 00 00 01 11 01 01 10 01 10 10 00 00 11 10 11 01 10 10 00 0 0 0 Output State Input State (x1,x2) 01 0 0 1 1 10 0 1 0 0 11 0 0 1 -


Problem 10.30. Develop a circuit to realize the following transition table. Flip-flop x is to be a J-K flip-flop, y is a T flip-flop and z is a D flip-flop.

                 Transition Table
Current State    Next State      Output Z
    xyz          Input (x)        at tn
                 0       1
a   000          b       d          0
b   010          c       a          0
c   100          a       d          1
d   001          b       e          1
e   011          a       a          1
Problem 10.31. Design a circuit using a multi-pulse design technique with J-K flip-flops to realize the following transition table.

                 Transition Table
Current      Next State         Output z1,z2
State        Input (p1,p2)      Input (p1,p2)
             01      10         01      10
(a) 0        0       1          00      00
(b) 1        0       1          01      10

Problem 10.32. Use T flip-flops to realize the following transition table.

Current State (a) (b) (c) (d) 00 01 10 11

Transition Table Next State Output z Input (p1,p2) Input (p1,p2) 01 10 01 10 00 01 0 0 10 11 0 0 00 01 0 01 00 1 0

Problem 10.33. Use T flip-flops to realize the following transition table.

                 Transition Table
Current      Next State         Output z1 z2 z3
State        Input (p1,p2)         at tn
             01      10
(a) 00       01      00          0 0 0
(b) 01       00      10          0 0 0
(c) 10       11      01          1 0 1
(d) 11       00      01          1 1 0


Problem 10.34. Use multi-pulse design techniques to develop a circuit to realize the transition table in Problem 10.32. (Use J-K flip-flops.)

Problem 10.35. Use multi-pulse design techniques to develop a circuit to realize the transition table in Problem 10.32. (Use D flip-flops.)

Problem 10.36. For each of the circuits below, set up the analysis Table of Combinations and develop a state graph and a transition table to describe its operation.
[Circuit schematics a. through g.: cross-coupled R-S structures (a., b.), clocked flip-flop circuits with S, J, K, and CP leads (c., d.), fundamental mode circuits on inputs x1, x2 with state variables Y1, Y2 and output Z (e., f.), and g., the W-Z Circuit.]


Problem 10.37. For each of the circuits below, set up the analysis Table of Combinations and develop a state graph and a transition table to describe its operation. (Assume clock pulses into the C terminals.)
[Circuit schematics a. through d.: (a) a Johnson Counter built from clocked D flip-flops; (b)-(d) clocked J-K flip-flop circuits with input X, state variables Y1, Y2, and output Z.]


Problem 10.38. For each of the circuits below, set up the analysis Table of Combinations and develop a state graph and a transition table to describe its operation. (Assume clock pulses into the C terminals.)

[Circuit schematics a. and b.: clocked circuits with input X and output Z; (a) uses a J-K flip-flop y1 and a D flip-flop y2.]


Chapter 11: Design of Iterative Circuits

11. Design of Iterative Circuits

11.1. Introduction

There is a subset of combinational circuits that can be designed quickly and efficiently using the techniques presented for the design of sequential circuits. The resulting circuits are distributed in the sense that each of several identical circuits solves a part of a problem and, when connected together, they solve the total problem. Circuits designed in this way are particularly easy to incorporate into integrated circuits through "step and repeat" equipment. Typical circuits that can be designed using this technique include:

- Parity Checking Circuits
- N Out Of M Inputs High (Or Low) Circuits
- N Adjacent (Geographically) Inputs High (Or Low) Circuits

These circuits have the common attribute that a set of input variables may be processed in a circuit, called a cell, along with information regarding previously processed variables from the "last cell," producing an identical form of information for use by the "next" cell. This concept is illustrated in Figure 11.1.

[A cascade of identical cells: cell j receives its input set X(j) and the information Y(j) from the previous cell, and passes the information Y(j+1) on; the last cell's information Y(n+1) yields the output Z.]

Figure 11.1. The General Form for an Iterative Circuit
The input variables to the circuit, represented here as xi, are separated into equal-sized sets X(j), and each of the sets becomes the input to one of the cells. Each cell operates on its set of input variables, along with the information passed to it from the previous cell, to produce an identical form of information for the next cell. The information is coded in variables represented here as yi in the set Y(j). These information variables represent the information states and, for design purposes, are analogous to the state variables of the sequential circuits studied in previous chapters. Since the output of the circuit may be different from the representation of the information variables, the output cell may be modified in design. Also, since the input state starts off with zero information, the first cell may take on a modified design, either to simplify that cell or to reduce the total number of cells required.

The key to the design process lies in defining the information states so that all intermediate cells will have identical information input and thus have the same design. The resultant "states" of information are now more spatial than temporal (as in sequential circuits). However, we may think of the input information state as the "current" state and of the output information state as the "next" state. There is a delay through each cell, but the delay is ignored in the design process. In practice, the output can only be sampled after all changes in the input have had time to propagate through the circuit.

The design process starts with an intermediate cell; the end cells may then be modified or simplified depending on the cell design and the output circuit requirements. The intermediate cell design can be broken down into the following steps:

- Determine the number of inputs to be processed by each cell.

- Determine the information states that must be used to communicate the information of previously processed cells to the "next" cell.
- Construct a transition table with the information state received from the previous cell as the current state on the left and the valid input states across the top. In each cell of the table, place the information state (next state) to be passed on to the next cell as a result of each input state occurring with each input information state. In the output state part of the transition table, place the output desired from the last cell if the yn+1 state were yn.
- Reduce the transition table if possible. (The techniques of Chapter 8 apply to these transition tables as well. In general, reduction will only occur if there has been some gross oversight in defining the information states.)
- Select a state assignment for the information states. (The techniques of Chapter 9 apply here as well.)

11.2. Parity Checking Circuits

Consider first a circuit for checking parity over a total of nine inputs. We assume that we want a single output which is to be high if and only if an odd number of inputs are high, and that we want to construct the circuit with and-or-not logic. Thinking now in terms of the information required from previous cells, we can use the properties of odd and even: if we know the odd/even character of all the previous cells' inputs, and we determine the odd/even character of the inputs to this cell, we can pass to the next cell the odd/even character of all inputs processed so far, including the inputs to this cell. A single variable is all that is needed to carry this information. Since we want the same type of information at the output, we observe at this time that a state assignment of 1 = odd, 0 = even might be advantageous. We now consider the number of inputs (not including the state variable) that can be brought into each cell.
We note that the first cell's y (state) input could actually be the signal on one of the inputs, and the first cell would then be the same as all the intermediate cells. It is a property of most problems that the information states will be the same regardless of the number of inputs to each cell. Consider first the case of one input per cell.

11.2.1. Parity Checking - One Input/Cell

The transition table is generated with the information states represented by the state variable yn on the left and the input states across the top. There are two information states, 0 (for even) and 1 (for odd). There are also two input states for xn, 0 and 1. The transition table is constructed with next-state entries for yn+1. Since we have already made the state variable assignment 1 = odd and 0 = even, we enter the total result for the previous entries plus the input xn. If yn is 0, then yn+1 will be 0 if xn is 0 and 1 if xn is 1. If yn is 1, then yn+1 will be 1 if xn is 0 and 0 if xn is 1. The result is shown in Figure 11.2a. We now construct a design table that automatically becomes the Table of Combinations for the design of the combinational network. This is shown in Figure 11.2b. The result is, of course, the exclusive-or: yn+1 = yn'·xn + yn·xn'.


State     yn+1 for Input State xn     Output
 yn           0          1            State z
  0           0          1               0
  1           1          0               1

Figure 11.2a. Transition Table

yn   xn   yn+1
 0    0     0
 0    1     1
 1    0     1
 1    1     0

Figure 11.2b. Design Table

Figure 11.2. One-Input Parity Checking Iterative Circuit Design
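The cascade of one-input cells amounts to chaining exclusive-ors. A sketch in modern notation (Python, not part of the original text), with the first cell's y input taken to be the first signal, as noted above:

```python
def parity_chain(inputs):
    """Iterative one-input-per-cell parity: y(n+1) = y(n)'*x(n) + y(n)*x(n)'."""
    y = inputs[0]                  # first cell's state input is the first signal
    for x in inputs[1:]:
        y = (y and not x) or (not y and x)   # the exclusive-or cell
    return int(y)
```

The final cell's information variable is itself the desired output, which is why the 1 = odd, 0 = even assignment needs no separate output circuit.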
11.2.2. Parity Checking - Two Inputs/Cell

The information state variable is the same as defined for the previous case. However, the two inputs now require a four-column transition table, as in Figure 11.3a. For yn = 0, the entry in each cell is a 1 if either x2n or x2n+1 is high (but not both), and 0 otherwise. For yn = 1, the entry is a 0 if either x2n or x2n+1 is high (but not both), and a 1 otherwise. Figure 11.3 shows the resultant transition and design tables.

State     yn+1 for Input State x2n,x2n+1     Output
 yn        00     01     10     11           State z
  0         0      1      1      0              0
  1         1      0      0      1              1

Figure 11.3a. Transition Table

yn   x2n   x2n+1   yn+1
 0    0      0       0
 0    0      1       1
 0    1      0       1
 0    1      1       0
 1    0      0       1
 1    0      1       0
 1    1      0       0
 1    1      1       1

Figure 11.3b. Design Table

Figure 11.3. Two-Input Parity Checking Iterative Circuit Design

The resultant design is yn+1 = yn'·x2n'·x2n+1 + yn'·x2n·x2n+1' + yn·x2n'·x2n+1' + yn·x2n·x2n+1. Comparing designs, we see that eight 1-input cells or four 2-input cells will be required. Using Criterion 3 from Chapter 5, the relative costs would be 6×8 = 48 for the one-input design and 16×4 = 64 for the two-input design.

Problem 11.1. Design a 3-input cell structure for the above circuit specification.

Problem 11.2. What would be the cost for a circuit using 4-input cells? An 8-input cell?

Problem 11.3. Assume cells are to be designed around exclusive-or gates. Draw the complete (9-input) circuit for 1-input, 2-input, 3-input and 4-input cell realizations.
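The two-input cell equation of Figure 11.3 can be checked exhaustively against plain exclusive-or (a verification sketch, not part of the text):

```python
def cell2(y, x1, x2):
    """Two-input parity cell:
    y+ = y'*x1'*x2 + y'*x1*x2' + y*x1'*x2' + y*x1*x2."""
    n = lambda v: 1 - v   # complement of a 0/1 value
    return ((n(y) & n(x1) & x2) | (n(y) & x1 & n(x2)) |
            (y & n(x1) & n(x2)) | (y & x1 & x2))
```

Over all eight input combinations this agrees with y XOR x1 XOR x2, confirming that each two-input cell is a three-way exclusive-or.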


11.2.3. N Out Of M Circuits

Consider now a circuit which has nine inputs, of which exactly two must be high at all sampling times. Design an iterative circuit that will have a one output if and only if exactly two inputs are high. Consider 1-input and 2-input cells. Starting with an intermediate cell, the conditions presented might be as follows:

State a: No (previous) inputs are high.
State b: Exactly one (previous) input is high.
State c: Exactly two (previous) inputs are high.
State d: More than two (previous) inputs are high.

These information states are also independent of the number of inputs per cell. The situation is a bit more complicated than with the previous circuit designed, and we will put off state assignment until we have reached a point where states must be assigned. Figure 11.4 shows the transition tables for both the one-input and two-input designs.

State     yn+1 for Input xn     Output
 yn           0       1         State z
  a           a       b            0
  b           b       c            0
  c           c       d            1
  d           d       d            0

Figure 11.4a. One-Input

State     yn+1 for Input x2n,x2n+1     Output
 yn        00     01     10     11     State z
  a         a      b      b      c        0
  b         b      c      c      d        0
  c         c      d      d      d        1
  d         d      d      d      d        0

Figure 11.4b. Two-Input

Figure 11.4. Transition Tables for a 2-out-of-9 Iterative Circuit

The table will not reduce, and so we move on to state assignment. Because of the cyclic nature of these tables, state assignments following the Gray code will be selected. The final design is left as an exercise for the student.

Problem 11.4. Set up the design tables and design the intermediate cells for the transition tables in Figure 11.4.

Problem 11.5. Design a simplified first cell of the one-input design for the 2-out-of-9 circuit.

Problem 11.6. Design a simplified first cell of the two-input design for the 2-out-of-9 circuit.

Problem 11.7. Design a simplified last cell for the one- and two-input designs for the 2-out-of-9 circuit.

Problem 11.8. Compare total circuit costs for the one- and two-input realizations for the 2-out-of-9 circuit.

Problem 11.9. Develop transition tables for the intermediate cells of a 3-out-of-10 circuit, a) for one-input cells, b) for two-input cells.

11.2.4. N Adjacent Inputs High

Consider a terminal strip with 16 inputs and a system that requires that the signals coming into this strip are valid if and only if the signals are high on even numbers of adjacent terminals. For example, no terminals high, or terminals 3, 4, 7, 8, 9 and 10 high, would both be valid input signals. Terminals 3, 4, 7, 8 and 9 high would not be valid, because the second group of high inputs is not even. Design an iterative circuit that would have a one output if the signal
was invalid and a zero output otherwise. Again, consider one-input and two-input cell designs.

State     yn+1 for Input xn     Output
 yn           0       1         State z
  a           a       b            0
  b           d       c            1
  c           a       b            0
  d           d       d            1

Figure 11.5a. One-Input

State     yn+1 for Input x2n,x2n+1     Output
 yn        00     01     10     11     State z
  a         a      b      d      c        0
  b         d      d      a      b        1
  c         a      b      d      c        0
  d         d      d      d      d        1

Figure 11.5b. Two-Input

Figure 11.5. Transition Tables for an Even Number of Adjacent Inputs High

For this problem, it is necessary to think in terms of groups of adjacent high terminals. To know when a failure has occurred, we must know when a group ends with an odd number of inputs high. A group begins when the first input is high, or when an input is high following a low input; it ends when the next adjacent input is low following a high input, or when the last input is reached. A set of states might be:

State a: All groups (if any) have terminated with an even number of inputs.
State b: All groups thus far have been valid and the number of adjacent high inputs coming into this cell is odd.
State c: All groups thus far have been valid and the number of adjacent high inputs coming into this cell is even.
State d: A failure has occurred, in that a group has terminated with an odd number of inputs high.

The primitive transition tables for this definition are shown in Figure 11.5.

Problem 11.10. Reduce the primitive transition table, develop a state assignment and complete the circuit design for the two transition tables in Figure 11.5.

Problem 11.11. Design an iterative circuit that will output a 1 if all groups of adjacent high inputs contain a multiple of three inputs.

Problem 11.12. Design an iterative circuit that will output a 1 if and only if there is exactly one group of adjacent high inputs.
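The one-input cell table of Figure 11.5a can be exercised directly against the specification. A simulation sketch (Python, not part of the text; state names as above, with z = 1 meaning the input pattern is invalid):

```python
# Figure 11.5a encoded as a dictionary: (state, input bit) -> next state.
NEXT = {('a', 0): 'a', ('a', 1): 'b',
        ('b', 0): 'd', ('b', 1): 'c',
        ('c', 0): 'a', ('c', 1): 'b',
        ('d', 0): 'd', ('d', 1): 'd'}
Z = {'a': 0, 'b': 1, 'c': 0, 'd': 1}   # output taken from the last cell's state

def invalid(terminals):
    """Return 1 if some group of adjacent high inputs has odd length."""
    y = 'a'
    for x in terminals:
        y = NEXT[(y, x)]
    return Z[y]
```

The examples from the text behave as expected: terminals 3, 4, 7, 8, 9 and 10 high is valid, while terminals 3, 4, 7, 8 and 9 high is not.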


Appendices

A. Appendices
A1. Number Systems
A2. Relationships between Mathematical Logic and Switching Theory
A3. Relationships between Probability and Switching Theory



A1. Appendix I. Number Systems


Numbers in the "counting" number systems have the following representation:

  N_b = C_i b^i + C_(i-1) b^(i-1) + ... + C_2 b^2 + C_1 b^1 + C_0 b^0 + C_(-1) b^(-1) + C_(-2) b^(-2) + ...

where b is the base of the number system. When all work is done in the same system, the powers of b are understood and generally not shown. For example,

  82 (base 10) = 8 × 10^1 + 2 × 10^0 = 80 + 2

The best representation of counting number systems is the mechanical counter, like the odometer on a car.

  Base 2    Base 8    Base 10   Base 16
  Counter   Counter   Counter   Counter
  00000     000       000       000
  00001     001       001       001
  00010     002       002       002
  00011     003       003       003
  00100     004       004       004
  00101     005       005       005
  00110     006       006       006
  00111     007       007       007
  01000     010       008       008
  01001     011       009       009
  01010     012       010       00A
  01011     013       011       00B
  01100     014       012       00C
  01101     015       013       00D
  01110     016       014       00E
  01111     017       015       00F
  10000     020       016       010
  10001     021       017       011
  etc.


Multiplication by the base number shifts all digits one place to the left; shifting a number left by one digit is comparable to multiplication by the base number. Division by the base number shifts all digits one place to the right; shifting a number right by one digit is comparable to division by the base number.

A1.1. Conversion from One Base to Another

The integer part of a number can be written as:

  C_i b^i + C_(i-1) b^(i-1) + ... + C_1 b + C_0,   where each C_j < b

Dividing by b   =>  C_i b^(i-1) + C_(i-1) b^(i-2) + ... + C_1  with remainder C_0
Dividing again  =>  C_i b^(i-2) + ... + C_2  with remainder C_1
Dividing again  =>  C_i b^(i-3) + ... + C_3  with remainder C_2

For example, convert 12 (base 10) to base 2:
  12 ÷ 2 = 6 remainder 0
   6 ÷ 2 = 3 remainder 0
   3 ÷ 2 = 1 remainder 1
   1 ÷ 2 = 0 remainder 1
So 12 (base 10) = 1100 (base 2).

Convert 1100 (base 2) to decimal, working in binary (10 in base 10 is 1010 in base 2):
  1100 ÷ 1010 = 1 remainder 10, and 10 (base 2) = 2
     1 ÷ 1010 = 0 remainder 1,  and  1 (base 2) = 1
So 1100 (base 2) = 12 (base 10).

The fractional part of a number is written as

  C_(-1) b^(-1) + C_(-2) b^(-2) + ... + C_(-k) b^(-k)

Multiplication by b will result in

  C_(-1) + C_(-2) b^(-1) + ...

If C_(-1) is subtracted and the remainder is multiplied again by b, the result will be C_(-2) + C_(-3) b^(-1) + ..., etc.

Convert 0.125 (base 10) to base 2:
  0.125 × 2 = 0.250
  0.250 × 2 = 0.500
  0.500 × 2 = 1.000
So 0.125 (base 10) = 0.001 (base 2).

Convert 0.001 (base 2) to decimal, working in binary (10 in base 10 is 1010 in base 2):
  0.001 × 1010 = 1.010  =>  1
  0.010 × 1010 = 10.10  =>  2
  0.100 × 1010 = 101.0  =>  5
So 0.001 (base 2) = 0.125 (base 10).

A1.2. Conversion Between Binary and Octal

Octal notation for binary numbers has an advantage over decimal because of the direct conversion. Three binary bits can count from 0 to 7 (base 10). We can partition a binary number into groups of three starting from the right. Each group of three is converted into
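The repeated-division and repeated-multiplication procedures above can be sketched in a few lines of Python. This is an illustrative aid only; the function names are my own, not part of the text.

```python
# Base conversion by the algorithms of Section A1.1.

def int_to_base(n, b):
    """Integer part: divide by b repeatedly, collecting remainders."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, b)      # the remainder is the next digit
        digits.append(r)
    return digits[::-1]          # most significant digit first

def frac_to_base(f, b, places):
    """Fractional part: multiply by b repeatedly, collecting integer parts."""
    digits = []
    for _ in range(places):
        f *= b
        d = int(f)
        digits.append(d)
        f -= d                   # subtract C_(-1) and repeat
    return digits

print(int_to_base(12, 2))        # [1, 1, 0, 0]
print(frac_to_base(0.125, 2, 3)) # [0, 0, 1]
```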


its octal equivalent. For example, convert the following 16-bit binary number to octal:

  0100111111000011  =>  0 100 111 111 000 011   (binary)
                    =>  0  4   7   7   0   3    (octal)

Conversion of octal to binary requires converting each octal digit into its binary equivalent.

A1.3. Conversion Between Binary and Hexadecimal

Hexadecimal notation (base 16) is similar to octal in that there is a simple direct conversion between binary and hexadecimal notation. Since four binary bits can represent the numbers 0 to 15, which are denoted as 0-9, A, B, C, D, E and F, we partition the binary number into groups of four, starting from the right. Each group of four is then converted to its hexadecimal equivalent. For example, convert the binary number in the previous example to hexadecimal:

  0100 1111 1100 0011  =>  4 F C 3

Conversion back to binary is accomplished by replacing each hexadecimal digit with its binary equivalent.
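The grouping procedure can be written directly in software. A minimal sketch (the helper name is my own; Python's built-in int() and format() could also do this conversion):

```python
# Binary-to-hexadecimal conversion by grouping bits in fours from the right.

def bin_to_hex(bits):
    """Pad on the left to a multiple of 4 bits, then convert each group of four."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return ''.join('0123456789ABCDEF'[int(g, 2)] for g in groups)

print(bin_to_hex('0100111111000011'))  # 4FC3
```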


A1.4. Some Important Numbers

  Power    Binary             Octal    Decimal  Hexadecimal  Comment
  2^1      10                 2        2        2
  2^2      100                4        4        4
  2^3      1000               10       8        8
  2^4      10000              20       16       10
  2^5      100000             40       32       20
  2^6      1000000            100      64       40
  2^7      10000000           200      128      80
  2^8 - 1  11111111           377      255      FF           (1 byte)
  2^8      100000000          400      256      100
  2^9      1000000000         1000     512      200
  2^10     10000000000        2000     1024     400          = 1K
  2^14     100000000000000    40000    16384    4000         = 16K
  2^16 - 1 1111111111111111   177777   65535    FFFF         = 64K - 1
  2^16     10000000000000000  200000   65536    10000        = 64K

A1.5. Two's Complement Notation

Representation of numbers in a computer must include a technique for negative numbers as well as positive numbers. There are many ways of doing this. One simple way would be to append a sign bit. Another would be to offset zero by one-half the range. Each of these has advantages, but the final selection will be made weighing all the pros and cons. Generally, the winner is a rather clever method known as two's complement. In this system, the high order bit will indeed represent the sign, with positive numbers having a 0 as the most significant bit and negative numbers having a 1 as the most significant bit.

If a system has n-bit capacity, then the n bits can hold a number of size 2^n - 1. We shall designate 2^n as Cn, since a number this size would require an additional bit that we may think of as a carry bit. Since the most significant bit is to be a 0 for positive numbers, the largest positive number that can be used is 2^(n-1) - 1. For example, with 8 bits, Cn = 256, and the largest possible positive number that can be stored is 2^7 - 1 = 127. With 16 bits, the largest positive number is 2^15 - 1 = 32,767. With 32 bits, the largest positive number is 2^31 - 1 = 2,147,483,647.

In two's complement, positive numbers are stored directly; they will always be less than 2^(n-1). Negative numbers are stored as 2^n minus their absolute value. For example, with 8 bits of storage, the negative numbers storable will be from -1 to -128, and their representation in the machine will be from 255 down to 128.
Another way of looking at this is that the negative numbers are stored in reverse order above the largest positive number. Since, for example, with 8 bits, we have 0 to 127 being stored as positive numbers, 128 to 255 would be stored as the negative numbers -128 to -1. This scheme has several advantages. Notice that the number 0 has the unique representation that all bits = 0. The collating sequence of the binary numbers within the

negative and positive groups of numbers is the same as the binary collating sequence. The only disadvantage is that negative numbers carry a larger binary equivalent than the positive numbers. It is also convenient that -1 can be used to set up a binary mask of all 1's. The principal advantage, however, lies in the simplicity of doing algebraic addition and subtraction.

A1.6. Addition and Subtraction with Two's Complement

The method for portraying negative numbers shown above may seem a bit complicated, but it actually simplifies the addition/subtraction process. To show this we use the following notation:

  2^n = Cn
  If X is positive (0 <= X <= 2^(n-1) - 1), then X is stored as X.
  If X is negative (0 < |X| <= 2^(n-1)), then X is stored as 2^n - |X|.

Consider X and Y as inputs to a binary summing operation over all bits including the sign bit; that is to say, we have an adder which does straight binary addition. We also assume that we can detect a carry if the result of the addition yields a number of size 2^n or more, and that we can detect whether the most significant bit (the sign bit) is 0 or 1. In the cases below, X and Y denote the magnitudes of the two numbers.

A. X positive, Y positive. The machine adds X + Y directly. We note that if X + Y > 2^(n-1) - 1, then we have exceeded the range of valid positive numbers. This means that an overflow indicator must be turned on. In this case the result will always appear to be a negative number, and the sign bit from the add operation may be used to signal an improper sum when both numbers were originally positive.

B. X positive, Y negative. The machine adds X + (2^n - Y) = 2^n + X - Y.
B.1. X > Y: The result is 2^n + (X - Y), which must always produce a carry out of the high order (sign) bit, but the right answer X - Y remains in the register. Since a valid positive number and a valid negative number can never exceed the range of valid numbers on addition, there can never be an overflow, and we ignore the carry.
B.2. X = Y: The result is 2^n + 0. There will be a carry, but the answer in the register will be 0, which is correct. So again, we can ignore the carry.
B.3. X < Y: The result is 2^n - (Y - X). The answer, negative and in proper machine form, is correct. There will be no carry.
We see that if the operands have different signs, the correct answer for algebraic addition will always be obtained by the binary adder circuitry, that the carry bit is not needed, and that the overflow indicator should never be turned on.

C. X negative, Y negative. The machine adds (2^n - X) + (2^n - Y) = 2^n + [2^n - (X + Y)]. We must have a carry out of the high order bit position if the answer is to be correct. For the moment, we would say the overflow light should be turned on if a carry does not occur; however, this is not a problem, since the addition of two stored negative numbers must always carry. We do see that overly large negative sums can result in the register containing a number that is less than 128 (in the 8-bit case); that is, the sign bit is zero. Therefore, an overflow indication must be turned on if the sign bit is not a 1 at the end of the cycle.
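The case analysis above can be modeled in software. The following is an illustrative Python sketch (names and the 8-bit width are my own choices, not from the text): a plain binary adder plus monitoring of the carry and sign bits.

```python
# 8-bit two's complement addition built from plain binary addition.

N = 8
MASK = (1 << N) - 1          # 0xFF: keeps the low 8 bits (the "register")

def encode(x):
    """Store x in two's complement: positives directly, negatives as 2^n - |x|."""
    assert -(1 << (N - 1)) <= x <= (1 << (N - 1)) - 1
    return x & MASK

def add(a, b):
    """Binary addition of two stored values; returns (register, carry, overflow)."""
    total = a + b
    result = total & MASK
    carry = total > MASK
    # Overflow: both operands share a sign but the result's sign differs.
    sa, sb, sr = a >> (N - 1), b >> (N - 1), result >> (N - 1)
    overflow = (sa == sb) and (sr != sa)
    return result, carry, overflow

r, c, v = add(encode(100), encode(-28))   # case B.1: X > Y
print(r, c, v)                            # 72 True False (carry ignored)
r, c, v = add(encode(100), encode(100))   # case A with overflow
print(r, c, v)                            # 200 False True (sign bit wrongly 1)
```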

As a result, we see that algebraic addition under two's complement is performed by ordinary binary adder circuits. Carry and sign bits must be monitored, however, and circuitry added to detect overflow. Subtraction can be performed by first finding the two's complement of the subtrahend and then performing the algebraic addition. It only remains to find the two's complement of a number. Consider the case for n = 8:

  2^n     = 1 0 0 0 0 0 0 0 0
  Y       = 0 x y 1 0 1 1 0   (7-bit magnitude, 8-bit machine representation)
  2^n - Y = 1 x̄ ȳ 0 1 0 1 0

If we perform a standard subtraction, we see that, starting from the right, trailing zeros come through unchanged to the two's complement. The first 1 comes through as a 1, and thereafter each digit is complemented. A sequential circuit that receives the digits least significant bit first, and has 1 bit of memory to remember whether or not a 1 has been detected, can be used. Note that finding the two's complement of a negative number returns the absolute value of that number (that is, re-complements it).
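The serial complementer just described can be sketched in software. An illustrative Python model (names are my own), with the single bit of memory made explicit:

```python
# Serial two's complementer: bits arrive least significant bit first;
# one bit of state records whether a 1 has been seen yet.

def twos_complement_serial(bits_lsb_first):
    """Pass bits through until the first 1; complement every bit after it."""
    seen_one = False                          # the single bit of memory
    out = []
    for b in bits_lsb_first:
        out.append(b ^ 1 if seen_one else b)  # complement only after the first 1
        seen_one = seen_one or b == 1
    return out

# 22 = 00010110 (base 2), sent LSB first:
print(twos_complement_serial([0, 1, 1, 0, 1, 0, 0, 0]))
# [0, 1, 0, 1, 0, 1, 1, 1], i.e. 11101010 = 234 = 256 - 22
```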
Ranges Involved with Two's Complement Addition (8-bit example)

  decimal number:   -128 ... -1    and   0 ... 127
  machine number:    128 ... 255   and   0 ... 127

  addition, both positive:        machine sums run 0 to 254; valid only if the sum is in 0 ... 127 (sign bit 0)
  addition, one pos., one neg.:   always valid; any carry out of 2^8 is discarded
  addition, both negative:        machine sums run 2^8 + 0 to 2^8 + 254; valid only if the register is in 128 ... 255 (sign bit 1)



A2. Appendix II. Relationships between Mathematical Logic and Switching Theory

Logic deals principally with determination of the truth quality of statements based on the assumption that other statements are true. There is an implication operator, called the conditional connective and written A → B, which means A implies B. In logic this means that if A is true, then B is true. It does not imply any quality of truth to either A or B. Note that it is transitive. From a set theory point of view, this means that A ⊆ B. In probability, this means that if A occurs, then B also occurs, or also, that A cannot occur without B occurring. It does not imply that if B has occurred then A has occurred, nor does it imply that either A or B has occurred. It does imply that if B has not occurred, then neither has A occurred. In mathematical theorem proving, if A → B, then it can be shown that a valid theorem always has a valid contrapositive theorem, B̄ → Ā.

In logic, a statement which is true by virtue of its logical structure is called a tautology. The statement "either P or P̄ has occurred" is a tautology and is represented as P + P̄. Tautologies are generally proved using the following two methods of proof:
1. Substitution (using the equivalence relation).
2. Inference: if x, and if x → y, then y.

All tautologies can be proved from the following four tautologies:
1. p + p → p                        (Principle of Tautology)
2. p → p + q                        (Principle of Addition)
3. p + q → q + p                    (Principle of Permutation; Commutativity)
4. (p → q) → [(r + p) → (r + q)]    (Principle of Summation)

Also, you can always develop proofs with truth tables (called the method of perfect induction). The truth tables in logic are almost identical to the Table of Combinations in switching theory. There is a difference, however, since one is examining the validity of a statement based upon the validity of other statements.
Their use requires first determining the domain of the variables involved (the same as with switching theory, except that these variables are now imbedded in the statements of fact) and then using the operators in the statements to establish the validity of the various conclusions drawn from them. For example, A → B has the following truth table. (Let 1 mean a statement is true and 0 mean a statement is false.)

  A  B  A → B
  0  0    1
  0  1    1
  1  0    0
  1  1    1

This table comes about because a statement is assumed true until proven false. Given A → B, if A is true then B must also be true; hence A = 1, B = 0 cannot be true and is therefore false. If A is false (0), then B as 0 or 1 is still possibly true. If we were to say in switching theory that if A is high then B is high, the implication is that B is a function of A and possibly other variables, for example x. The statement is used to help define the function for the case when A is high, and we might begin a truth table as follows:

  x  A  B
  0  0  -
  0  1  1
  1  0  -
  1  1  1

We can mix switching theory and logic in a productive way. For example, consider A and B as two input signals. Then the statement "if A is high, then B is high" provides information regarding the domain of the switching function. Indeed, the don't care function can be established from statements such as this. If this were the only statement, the truth table for the don't care function would be as follows:

  A  B  dc
  0  0  0
  0  1  0
  1  0  1
  1  1  0

This is the complement of A → B in the logical truth table for A → B. From this type of statement, it can be seen that the switching theorist must be very careful. Logic forms the basis for communication between people, and the ability to convert verbal or written statements into the symbolism used in switching theory requires a clean conceptual understanding of the use of our verbal and written languages.

There are additional "laws":

  Law of Contraposition:   if p → q, then q̄ → p̄  (mentioned above)
  Hypothetical Syllogism:  [(p → q)(q → r)] → (p → r)
  DeMorgan's Laws:         ¬(p + q) = p̄ q̄
                           ¬(p q) = p̄ + q̄
  Distributivity:          p (q + r) = (p q) + (p r)
                           p + (q r) = (p + q)(p + r)
  Associativity:           p + (q + r) = (p + q) + r
                           p (q r) = (p q) r

Given that p → q is true, then:
  q̄ → p̄ is the contrapositive and is always true;
  q → p is the converse and may or may not be true;
  p̄ → q̄ is the inverse and is true if the converse is true.
If p → q and q → p, then the equivalence relation exists and substitution may be used.
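These relationships can be checked by perfect induction. A quick Python sketch (illustrative only; the helper name is my own) runs the truth tables:

```python
# Perfect induction over the two logical variables p and q.
from itertools import product

def implies(a, b):
    """The conditional connective: false only when a is true and b is false."""
    return (not a) or b

for p, q in product([False, True], repeat=2):
    # The contrapositive always matches the original implication:
    assert implies(p, q) == implies(not q, not p)
    # The inverse and the converse always match each other:
    assert implies(not p, not q) == implies(q, p)

# The converse is NOT equivalent to the implication (p = False, q = True disagrees):
print(implies(False, True), implies(True, False))  # True False
```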



A3. Appendix III. Relationships between Probability and Switching Theory


Boole originally used the pictures of intersecting circles to illustrate concepts of probability. Today those concepts are extended into Karnaugh maps (or Veitch diagrams) to illustrate the principles of switching theory. Since almost all graduating engineers have been exposed to Karnaugh maps, we will use these maps and theorems of switching theory as an aid in the review of probability theory and its theorems. Consider a Karnaugh map for one variable, x. If we place in each cell the probability of the occurrence of the phenomena represented by that cell, then we have a global picture of the probability of x occurring or of x not occurring.

  x:    0     1
       0.6   0.4

A two-variable map carries one probability per minterm cell:

         x = 0   x = 1
  y = 0   0.2     0.3
  y = 1   0.3     0.2

Consider a Karnaugh map of two variables. We now find a "magic square" type of situation. Each cell that represents a minterm in switching theory now represents the probability of the occurrence of an event involving two variables. The cell xy contains the probability that both x and y will occur. The cell x̄y contains the probability that x will not occur with y occurring, etc. The sum of the contents of all cells under x will yield the probability of x occurring, and the sum of all cells under y, the probability of y occurring. The sum of all cells will total to 1. From the map, we may also determine the probability that x or y will occur as the sum of the contents of the cells represented by the logical expression (x + y). We may also determine the probability that either x or y (but not both) has occurred as the sum of the cells representing (x + y) minus the contents of the cell represented by the logical (xy), or as the sum of the cells in the logical expression (x + y)(x̄ + ȳ). The concept of independence is represented by a situation where the probabilities in the individual cells for the other variables under x are proportional to the probabilities in the respective cells under x̄. (They are not in the example shown above.) The augmentation of the Karnaugh map to include a numerical representation in the cells, and the use of arithmetic with respect to these numbers, encourage us to change '+' for "logical or" and '·' for "logical and" to '∪' for "logical or" and '∩' for "logical and". The system in which we have both logical and arithmetical operations over a countably-infinite number of variables is referred to in mathematics as a Borel field. Typically, we can say that if a variable x is in our set of variables, then so is x̄. The logical operations union, intersection and complementation may be applied to all subsets of variables, and standard arithmetic operations may be applied to the numerical values that represent the probabilities of occurrence.
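Reading the map as a probability table can be demonstrated in a few lines of Python (an illustrative sketch; the cell values are those of the example map above, and the variable names are my own):

```python
# Each map cell (x, y) holds the probability of its minterm.
p = {
    (0, 0): 0.2, (1, 0): 0.3,
    (0, 1): 0.3, (1, 1): 0.2,
}

p_x = p[(1, 0)] + p[(1, 1)]                              # sum of cells under x
p_y = p[(0, 1)] + p[(1, 1)]                              # sum of cells under y
p_x_or_y = sum(v for (x, y), v in p.items() if x or y)   # cells of (x + y)
p_x_and_y = p[(1, 1)]                                    # the xy cell

print(p_x, p_y)                               # 0.5 0.5
print(round(p_x_or_y, 10))                    # 0.8
print(round(p_x + p_y - p_x_and_y, 10))       # 0.8, agreeing with p(x U y)
# Independence would require p(x ∩ y) = p(x) p(y) = 0.25; here it is 0.2.
```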
The following relationships are useful in probability theory, and are all derivable from the concepts just presented. (Some new notation will be presented also.) p(x) will represent the probability of the occurrence of the event x; p(x̄) will represent the probability of x not occurring.

p(x | y) will represent the probability of x occurring given that the event y has occurred (called conditional probability). p(x ∪ y) is the probability of occurrence of either x or y or both. p(x ∩ y) is the probability of both x and y occurring simultaneously.

For any single variable:

  p(x̄) = 1 - p(x)        (1)
  p(x ∪ x) = p(x)        (2)
  p(x ∩ x) = p(x)        (3)
  p(x ∩ x̄) = 0           (4)
  p(x ∪ x̄) = 1           (5)
  p(x | x) = 1           (6)
  p(x | x̄) = 0           (7)


For two variables:

  p(x) = p(x ∩ y) + p(x ∩ ȳ)                    (8)
  p(x̄) = p(x̄ ∩ y) + p(x̄ ∩ ȳ)                    (9)
  p(x ∪ y) = p(x ∩ y) + p(x ∩ ȳ) + p(x̄ ∩ y)     (10)
           = p(x) + p(x̄ ∩ y)                    (11)
           = p(x) + p(y) - p(x ∩ y)             (12)
           = 1 - p(x̄ ∩ ȳ)                       (13)
  p(x ∩ y) = 1 - p(x̄) - p(x ∩ ȳ)                (14)
           = 1 - p(x̄) - p(ȳ) + p(x̄ ∩ ȳ)         (15)
           = 1 - p(x̄ ∪ ȳ)                       (16)
           = p(x ∪ y) - p(x̄ ∩ y) - p(x ∩ ȳ)     (17)
  p(x̄ ∪ ȳ) = 1 - p(x ∩ y)                       (18)
  p(x̄ ∩ ȳ) = 1 - p(x ∪ y)                       (19)
  p(x | y) = p(x ∩ y) / p(y)                    (20)
  p(x ∩ y) = p(x | y) p(y)                      (21)
           = p(y | x) p(x)                      (22)
  p(x) = p(x | y) p(y) + p(x | ȳ) p(ȳ)          (23)
  p(x̄) = p(x̄ | y) p(y) + p(x̄ | ȳ) p(ȳ)          (24)

For more than two variables: the logical operators, union and intersection, are commutative, associative and distribute over each other. In general,

  p(any logical expression) = p(every equivalent logical expression)    (25)
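A few of these identities can be spot-checked numerically. The sketch below (illustrative Python; names are my own) uses the example cell probabilities from the two-variable Karnaugh map earlier in this appendix:

```python
# Spot-checking two-variable probability identities against map cells.

cells = {(0, 0): 0.2, (1, 0): 0.3, (0, 1): 0.3, (1, 1): 0.2}

def p(pred):
    """Probability of the event selected by a predicate over the cells (x, y)."""
    return sum(v for (x, y), v in cells.items() if pred(x, y))

px      = p(lambda x, y: x == 1)
py      = p(lambda x, y: y == 1)
pxy     = p(lambda x, y: x == 1 and y == 1)
px_or_y = p(lambda x, y: x == 1 or y == 1)

print(abs(px - (pxy + p(lambda x, y: x == 1 and y == 0))) < 1e-9)     # (8)  True
print(abs(px_or_y - (px + py - pxy)) < 1e-9)                          # (12) True
print(abs(px_or_y - (1 - p(lambda x, y: x == 0 and y == 0))) < 1e-9)  # (13) True

p_x_given_y = pxy / py        # (20): p(x | y) = p(x ∩ y) / p(y)
print(round(p_x_given_y, 3))  # 0.4
```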


Of significant importance are the conditional probability relationships:

  p(x) = Σ_i p(x | y_i*) p(y_i*)                    (26)

where the y_i* represent all (mutually exclusive) intersections of the y_i variables and their complements. (This is equivalent to all the minterm expressions of the y_i variables.) Also, in general we have:

  p(x | y_i) = p(x ∩ y_i) / p(y_i)                  (27)

Similarly,

  p(y_i | x) = p(x ∩ y_i) / p(x)                    (28)

so that

  p(x ∩ y_i) = p(x | y_i) p(y_i) = p(y_i | x) p(x)  (29)

and

  p(x | y_i) = p(y_i | x) p(x) / p(y_i)             (30)

or

  p(y_i | x) = p(x | y_i) p(y_i) / p(x)             (31)
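Total probability and Bayes' rule, equations (26) and (31), can be sketched with a small numerical example. The numbers below are invented for illustration (they are not from the text); y and ȳ play the role of the mutually exclusive y_i*:

```python
# Total probability and Bayes' rule with a two-way partition {y, not-y}.

p_y     = 0.5
p_not_y = 0.5
p_x_given = {'y': 0.4, 'not_y': 0.6}   # p(x | y) and p(x | ȳ), assumed values

# Equation (26): p(x) = sum over the partition of p(x | y_i*) p(y_i*)
p_x = p_x_given['y'] * p_y + p_x_given['not_y'] * p_not_y
print(p_x)   # 0.5

# Equation (31): p(y | x) = p(x | y) p(y) / p(x)
p_y_given_x = p_x_given['y'] * p_y / p_x
print(p_y_given_x)   # 0.4
```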

