
Complex instruction set computing

From Wikipedia, the free encyclopedia


Complex instruction set computing, or CISC (pronounced /ˈsɪsk/), is a computer instruction set architecture (ISA) in which each instruction can execute several low-level operations, such as a load from memory, an arithmetic operation, and a memory store, all in a single instruction. A computer based on such an architecture is a complex instruction set computer (also CISC). The terms were retroactively coined in contrast to reduced instruction set computing (RISC). Examples of CISC processor families are the System/360, PDP-11, VAX, 68000, and x86.

Contents
1 Historical design context
  1.1 Incentives and benefits
    1.1.1 Performance
  1.2 Potential problems
    1.2.1 In the 21st Century
    1.2.2 CISC and RISC
2 See also
3 External links
4 References

Historical design context


Incentives and benefits
Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e. to design instruction sets that directly supported high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions were also typically highly encoded in order to further enhance code density. The compact nature of such instruction sets results in smaller program sizes and fewer (slow) main memory accesses, which at the time (early 1960s and onwards) resulted in tremendous savings on the cost of computer memory and disk storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high-level languages such as Fortran or Algol were not always available or appropriate (microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications).

Performance

In the 1970s, analysis of high-level languages indicated that some constructs had complex machine language implementations, and it was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high-level languages. Compilers were updated to take advantage of these instructions.

The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high-performance segment, where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed at all is that main memories (i.e. dynamic RAM today) remain slow compared to a (high-performance) CPU core.
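For illustration (this example is not from the original article), the C statement below updates one array element; the comments sketch how a CISC-style ISA with memory operands and complex addressing modes might express it as a single instruction, while a load-store (RISC-style) ISA needs separate load, add, and store instructions. The mnemonics are generic and purely illustrative, not the syntax of any particular ISA.

/* Generic, invented mnemonics; a simplified sketch, not a specific ISA. */
void increment_element(int *a, int i) {
    a[i] += 1;
    /* CISC-style: one instruction, memory operand with scaled-index addressing:
     *     ADD   [a + i*4], 1
     *
     * RISC-style (load-store): three instructions, arithmetic only on registers:
     *     LOAD  r1, [a + i*4]    ; fetch a[i] into a register
     *     ADD   r1, r1, 1        ; do the arithmetic in the register
     *     STORE r1, [a + i*4]    ; write the result back to memory
     */
}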

Potential problems


While many designs achieved the aim of higher throughput at lower cost, and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that this was not always the case. For instance, low-end versions of complex architectures (i.e. using less hardware) could lead to situations where it was possible to improve performance by not using a complex instruction (such as a procedure call or enter instruction), but instead using a sequence of simpler instructions.

One reason for this was that architects (microcode writers) sometimes "over-designed" assembly language instructions, i.e. included features that could not be implemented efficiently on the basic hardware available. These could, for instance, be "side effects" (beyond conventional flags), such as the setting of a register or memory location that was perhaps seldom used; if this was done via ordinary (non-duplicated) internal buses, or even the external bus, it would demand extra cycles every time, and thus be quite inefficient.

Even in balanced high-performance designs, highly encoded and (relatively) high-level instructions could be complicated to decode and execute efficiently within a limited transistor budget. Such architectures therefore required a great deal of work on the part of the processor designer in cases where a simpler, but (normally) slower, solution based on decode tables and/or microcode sequencing was not appropriate. At a time when transistors were a limited resource, this also left less room on the processor to optimize performance in other ways, which gave room for the idea of returning to simpler processor designs in order to make it feasible to cope without ROMs (or even PLAs) for sequencing and/or decoding. This led to the first RISC-labeled processors in the mid-1970s (the IBM 801, developed at IBM's Watson Research Center).

The variable length and complex encoding of some typical CISC architecture instructions also have a negative impact on superscalar designs, as they limit the instruction-level parallelism that can be extracted from the code.

In the 21st Century

Transistors for logic, PLAs, and microcode are no longer scarce resources (with the possible exception of high-speed cache memory); the transistor budget of the decoder has not grown exponentially like the total number of transistors per processor. Together with better tools and new technologies, this has led to new implementations of highly encoded and variable-length designs without load-store limitations (i.e. non-RISC). This applies both to re-implementations of older architectures such as the ubiquitous x86 (see below) and to new designs for microcontrollers for embedded systems and similar uses. In the case of x86, the superscalar problem was solved (in Intel's implementations) by translating instructions into micro-operations; this allows a fairly simple superscalar design to be located after the decoders, giving, so to speak, the best of both worlds. The age of finely hand-tuned assembly code was also largely over, and the promise of very high instruction-level parallelism was never realized.

CISC and RISC

The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that was little more than a typical RISC instruction set (i.e. without typical RISC load-store limitations). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism from the code stream, for even higher performance.
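As a rough sketch (hypothetical types and mnemonics, not Intel's actual internal micro-operation format), the following C program illustrates the idea of "cracking" a single memory-operand instruction into simpler, RISC-like micro-operations that a load-store execution core can then pipeline and schedule independently.

#include <stdio.h>

/* Hypothetical illustration of micro-operation cracking: one CISC-style
 * instruction with a memory operand is decoded into simpler micro-ops.
 * The names and structure are invented for clarity and do not describe
 * any real processor's internal format. */

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    const char *text;   /* human-readable form, for printing */
} uop;

/* Decode "ADD [mem], reg" into three micro-operations. */
static int decode_add_mem_reg(uop out[], int max)
{
    if (max < 3) return 0;
    out[0] = (uop){ UOP_LOAD,  "load  tmp <- [mem]" };
    out[1] = (uop){ UOP_ADD,   "add   tmp <- tmp + reg" };
    out[2] = (uop){ UOP_STORE, "store [mem] <- tmp" };
    return 3;
}

int main(void)
{
    uop uops[3];
    int n = decode_add_mem_reg(uops, 3);
    for (int i = 0; i < n; i++)
        printf("%s\n", uops[i].text);   /* the execution core sees only simple micro-ops */
    return 0;
}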
