Complex instruction set computing, or CISC (pronounced /ˈsɪsk/), is a computer instruction set architecture (ISA) in which each instruction can execute several low-level operations, such as a load from memory, an arithmetic operation, and a memory store, all in a single instruction. A computer based on such an architecture is a complex instruction set computer (also CISC). The terms were retroactively coined in contrast to reduced instruction set computing (RISC). Examples of CISC processor families are System/360, PDP-11, VAX, 68000, and x86.
Contents

1 Historical design context
  1.1 Incitements and benefits
    1.1.1 Performance
  1.2 Potential problems
    1.2.1 In the 21st Century
    1.2.2 CISC and RISC
2 See also
3 External links
4 References
cost of computer memory and disc storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high-level languages such as Fortran or Algol were not always available or appropriate (microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications).

Performance

In the 1970s, analysis of high-level languages indicated that their machine-language implementations were often complex, and it was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high-level languages, and compilers were updated to take advantage of them. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high-performance segment, where caches are a central component (as opposed to most embedded systems). This is because these fast but complex and expensive memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e. dynamic RAM today) remain slow compared to a (high-performance) CPU core.
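The code-density advantage described above can be illustrated with a toy comparison. Every encoding below is invented for this sketch (it is not any real ISA): a single CISC-style memory-to-memory add occupies fewer instruction-cache bytes than the equivalent fixed-width load/add/store sequence a load-store machine would need.

```python
# Toy illustration of code density (all encodings invented for this sketch).
# A CISC-style memory-to-memory add is one variable-length instruction,
# while a load-store (RISC-style) machine needs several fixed 32-bit ones.

def encode_cisc_add(dst_addr: int, src_addr: int) -> bytes:
    # One instruction: opcode byte + two 16-bit addresses = 5 bytes total.
    return (bytes([0x01])
            + dst_addr.to_bytes(2, "little")
            + src_addr.to_bytes(2, "little"))

def encode_risc_add(dst_addr: int, src_addr: int) -> bytes:
    # load r1,[src]; load r2,[dst]; add r2,r2,r1; store r2,[dst]
    # -- four fixed 32-bit instructions = 16 bytes.
    instrs = [0x10000000 | src_addr,   # load r1, [src]
              0x10010000 | dst_addr,   # load r2, [dst]
              0x20020100,              # add  r2, r2, r1
              0x30010000 | dst_addr]   # store r2, [dst]
    return b"".join(i.to_bytes(4, "little") for i in instrs)

cisc = encode_cisc_add(0x1000, 0x2000)  # 5 bytes
risc = encode_risc_add(0x1000, 0x2000)  # 16 bytes
```

For the same work, the denser encoding occupies roughly a third of the instruction-cache space, which is the point the paragraph above makes about caches being limited in size.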
The variable length and complex encoding of some typical CISC instructions also have a negative impact on superscalar designs: they limit the instruction-level parallelism that can be extracted from the code.

In the 21st Century

Transistors for logic, PLAs, and microcode are no longer scarce resources (with the possible exception of high-speed cache memory); the transistor budget of the decoder has not grown exponentially like the total number of transistors per processor. Together with better tools and new technologies, this has led to new implementations of highly encoded and variable-length designs without load-store limitations (i.e. non-RISC). This governs re-implementations of older architectures such as the ubiquitous x86 (see below) as well as new designs for microcontrollers for embedded systems and similar uses. In the case of x86, the superscalar problem was solved by decoding instructions into micro-operations (micro-ops), as in Intel's implementations. This allows a simple superscalar design to be located after the decoders, giving, so to speak, the best of both worlds. Moreover, the age of finely hand-tuned code is over, and the promise of high-level instruction parallelism was never fully realized.

CISC and RISC

The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that resembled little more than a typical RISC instruction set (albeit without typical RISC load-store limitations). The Intel P5 Pentium generation was a superscalar version of these principles.
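One concrete reason variable-length encodings complicate superscalar decode, as noted above, is that the start of instruction n+1 is unknown until the length of instruction n has been determined, so instruction boundaries must be found sequentially. A minimal sketch, with invented opcodes and lengths:

```python
# Sketch of the boundary-finding problem with variable-length encodings.
# Opcode-to-length mapping is invented for illustration. Because each
# instruction's length depends on its first byte, the decoder cannot know
# where the next instruction begins without examining the current one,
# which serializes (or greatly complicates) parallel decode.

LENGTHS = {0x01: 1, 0x02: 3, 0x03: 5}  # hypothetical opcode -> byte length

def find_boundaries(code: bytes) -> list[int]:
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += LENGTHS[code[pc]]  # must decode the length before advancing
    return offsets

stream = bytes([0x02, 0, 0, 0x01, 0x03, 0, 0, 0, 0, 0x01])
boundaries = find_boundaries(stream)  # [0, 3, 4, 9]
```

A fixed-width ISA makes this trivial (boundaries fall at every 4 bytes, say); real x86 decoders instead spend hardware on length-marking logic to decode several instructions per cycle.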
However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internal buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism out of the code stream, for even higher performance.
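The micro-operation splitting described above can be sketched as follows. This is a hedged illustration, not a real decoder: the instruction format, the `crack` function, and the micro-op names are all invented. The idea is only that a complex register-memory instruction is cracked into simple load/execute/store micro-ops that a RISC-like core can schedule independently.

```python
# Hypothetical sketch of micro-op "cracking": a read-modify-write
# instruction with a memory destination is split into load, execute,
# and store micro-operations; simple register-only instructions pass
# through unchanged. Syntax and names are invented for illustration.

def crack(instr: str) -> list[str]:
    op, dst, src = instr.replace(",", "").split()
    if dst.startswith("["):              # memory destination: read-modify-write
        return [f"load tmp {dst}",       # micro-op 1: read the memory operand
                f"{op} tmp {src}",       # micro-op 2: do the arithmetic
                f"store {dst} tmp"]      # micro-op 3: write the result back
    return [instr]                       # already simple: one micro-op

uops = crack("add [0x1000], r1")
# -> ["load tmp [0x1000]", "add tmp r1", "store [0x1000] tmp"]
```

Each micro-op resembles a load-store instruction, which is what lets the buffered sequence be pipelined and reordered more aggressively than the original CISC instruction could be.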