
Chapter 01

Structure
1.0 Introduction
1.1 Evolution and Interpretation of the Concept of Computer Architecture
1.2 Parallel/Vector Computers
1.3 Development Layers
1.4 New Challenges
1.5 Concrete Architectures of Computer Systems
1.6 Abstract Architecture of Processors
1.7 Summary
1.8 Exercise

1.0 Introduction
Computers have gone through two major stages of development: mechanical and electronic. Parallelism in architecture is one such change. It appears in various forms, such as lookahead, pipelining, vectorization, concurrency, and distributed computing, at different processing levels. The evolution of all these techniques is briefly explained in this unit.

1.1 Evolution and Interpretation of the Concept of Computer Architecture
The study of computer architecture involves both hardware organization and programming/software requirements. As seen by an assembly language programmer, computer architecture is abstracted by its instruction set, which includes opcodes (operation codes), addressing modes, registers, virtual memory, etc. From the hardware implementation point of view, the abstract machine is organized with CPUs, caches, buses, microcode, pipelines, physical memory, etc. Therefore, the study of architecture covers both instruction-set architectures and machine implementation organizations.

Over the past four decades, computer architecture has gone through evolutionary rather than revolutionary changes. Sustaining features are those that have proven to be performance deliverers. As depicted in Fig. 1.1, we started with the von Neumann architecture, built as a sequential machine executing scalar data. The sequential computer was improved from bit-serial to word-parallel operations, and from fixed-point to floating-point operations. The von Neumann architecture is slow due to the sequential execution of instructions in programs.

Figure 1.1 Tree showing architectural evolution from sequential scalar computers to vector processors and parallel computers

Evolution and Interpretation of the Concept of Computer Architecture
Although the concept of computer architecture is unquestionably one of the basic concepts in informatics, at present there is no general agreement about its definition or interpretation. In the following, we first describe how this important concept has evolved. Then, we state our definition and interpretation.

Evolution of the concept
As far as the evolution of the concept of computer architecture is concerned, there are four major steps to be emphasized, which are overviewed as follows. The term computer architecture was coined in 1964 by the chief architects of the IBM System/360 (Amdahl et al., 1964) in a paper announcing the most successful family of computers ever built. They interpreted computer architecture as the structure of a computer that a machine language programmer must understand to write a correct (timing-independent) program for a machine. Essentially, their interpretation comprises the definition of registers and memory as well as of the instruction set, instruction formats, addressing modes and the actual coding of the instructions, excluding implementation and realization. By implementation they understood the actual hardware structure, and by realization the logic technology, packaging and interconnections. An important contribution to the concept of computer architecture was the introduction of a hierarchical, multilevel description (Bell and Newell, 1971). They identified four levels that can be used for describing a computer. These

are the electronic circuit level, the logic design level, the programming level and the processor-memory-switch (PMS) level. The third level refers to the concept of architecture mentioned above. The fourth level is a top-level description of a computer system based on the specification of the basic units, like the processor and memory, as well as their interconnections. The next step in refining the concept of computer architecture was to extend the concept equally to both the functional specification and the hardware implementation.

Lookahead, Parallelism, and Pipelining
Lookahead techniques were introduced to prefetch instructions in order to overlap I/E (instruction fetch/decode and execution) operations and to enable functional parallelism. Functional parallelism was supported by two approaches: one is to use multiple functional units simultaneously, and the other is to practice pipelining at various processing levels. The latter includes pipelined instruction execution, pipelined arithmetic computations, and memory-access operations. Pipelining has proven especially attractive for performing identical operations repeatedly over vector data strings. Vector operations were originally carried out implicitly by software-controlled looping using scalar pipeline processors.

Flynn's classification categorizes various computer architectures based on the notions of instruction and data streams. As illustrated in Fig. 1.2a, conventional sequential machines are called SISD (single instruction stream over a single data stream) computers. Vector computers are equipped with scalar and vector hardware or appear as SIMD (single instruction stream over multiple data streams) machines (Fig. 1.2b). Parallel computers are reserved for MIMD (multiple instruction streams over multiple data streams) machines. MISD (multiple instruction streams over a single data stream) machines are modeled in Fig. 1.2.
The same data stream flows through a linear array of processors executing different instruction streams. This architecture is also known as a systolic array, used for the pipelined execution of specific algorithms. Of the four machine models, most parallel computers built in the past assumed the MIMD model for general-purpose computations. The SIMD and MISD models are more suitable for special-purpose computations. For this reason, MIMD is the most popular model, SIMD next, and MISD the least popular model applied in commercial machines.

1.2 Parallel/Vector Computers
Intrinsic parallel computers are those that execute programs in MIMD mode. There are two major classes of parallel computers, namely, shared-memory multiprocessors and message-passing multicomputers. The major distinction between multiprocessors and multicomputers lies in memory sharing and the mechanisms used for interprocessor communication. The processors in a multiprocessor system communicate with each other through shared variables in a common memory. Each computer node in a multicomputer system has a local memory, unshared with other nodes. Interprocessor communication is done through message passing among the nodes.

Explicit vector instructions were introduced with the appearance of vector processors. A vector processor is equipped with multiple vector pipelines that can be concurrently used under hardware or firmware control. There are two families of pipelined vector processors: memory-to-memory architecture supports the pipelined flow of vector operands directly from the memory to the pipelines and back to the memory, while register-to-register architecture uses vector registers to interface between the memory and the functional pipelines.

Another important branch of the architecture tree consists of the SIMD computers for synchronized vector processing. An SIMD computer exploits spatial parallelism rather than temporal parallelism as in a pipelined computer. SIMD computing is achieved through the use of an array of processing elements (PEs) synchronized by the same controller. Associative memory can be used to build SIMD associative processors.
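The SIMD idea described above, a single controller broadcasting one instruction to an array of PEs that each apply it to their own local data element, can be sketched as a toy simulation in Python. This is only an illustration of the execution model, not real hardware; the names `broadcast` and `pe_data` are invented for this sketch.

```python
# Toy simulation of SIMD execution: one control unit broadcasts a single
# instruction, and every processing element (PE) applies it in lockstep
# to its own local data element (spatial parallelism).

def broadcast(instruction, pe_data):
    """Apply the same instruction to every PE's local datum."""
    return [instruction(x) for x in pe_data]

# Each list slot plays the role of one PE's local memory.
pe_data = [1, 2, 3, 4]

# SISD style would visit elements one at a time in a scalar loop;
# SIMD style: conceptually, all PEs execute `x * 2` in the same cycle.
result = broadcast(lambda x: x * 2, pe_data)
print(result)  # [2, 4, 6, 8]
```

The key point is that there is only one instruction stream (the lambda) but many data streams (the list slots), which is exactly the SI/MD split in Flynn's terminology.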

1.3 Development Layers
A layered development of parallel computers is based on a recent classification. Hardware configurations differ from machine to machine, even among those of the same model. The address space of a processor in a computer system varies among different architectures. It depends on the memory organization, which is machine-dependent. These features are up to the designer and should match the target application domains.

On the other hand, we want to develop application programs and programming environments that are machine-independent. Independent of the machine architecture, user programs can then be ported to many computers with minimum conversion costs. High-level languages and communication models depend on the architectural choices made in a computer system. From a programmer's viewpoint, these two layers should be architecture-transparent. However, the communication models, shared variables versus message passing, are mostly machine-dependent. Application programmers prefer more architectural transparency, whereas kernel programmers have to exploit the opportunities supported by the hardware. A good computer architect has to approach the problem from both ends. The compilers and OS support should be designed to remove as many architectural constraints as possible from the programmer.
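The two communication models named above, shared variables versus message passing, can be contrasted in a small sketch. Here both are simulated with Python threads purely for illustration; a real multicomputer would pass messages over an interconnection network between nodes with private memories.

```python
import threading
import queue

# Model 1: shared-memory multiprocessor style -- "processors" (threads)
# communicate through a shared variable protected by a lock.
shared = {"total": 0}
lock = threading.Lock()

def shared_memory_worker(value):
    with lock:  # synchronize access to the shared variable
        shared["total"] += value

# Model 2: message-passing multicomputer style -- each node keeps its
# memory local and communicates only by sending messages (a queue here).
mailbox = queue.Queue()

def message_passing_worker(value):
    mailbox.put(value)  # send a message instead of touching shared state

threads = [threading.Thread(target=shared_memory_worker, args=(v,)) for v in (1, 2, 3)]
threads += [threading.Thread(target=message_passing_worker, args=(v,)) for v in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "receiving node" drains its mailbox to combine the partial results.
received = sum(mailbox.get() for _ in range(3))
print(shared["total"], received)  # 6 6
```

Note how the machine dependence mentioned above shows up even in this toy: the shared-variable version needs explicit synchronization (the lock), while the message-passing version needs an explicit receive step.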

1.4 New Challenges
The technology of parallel processing is the outgrowth of four decades of research and industrial advances in microelectronics, printed circuits, high-density packaging, advanced processors, memory systems, peripheral devices, communication channels, language evolution, compiler sophistication, operating systems, programming environments, and application challenges. The rapid progress made in hardware technology has significantly increased the economic feasibility of building a new generation of computers adopting parallel processing. However, the major barrier preventing parallel processing from entering the production mainstream is on the software and application side. To date, it is still very difficult and painful to program parallel and vector computers. We need to strive for major progress in the software area in order to create a user-friendly environment for high-power computers. A whole new generation of programmers needs to be trained to program parallelism effectively.

High-performance computers provide fast and accurate solutions to scientific, engineering, business, social, and defense problems. Representative real-life problems include weather forecast modeling, computer-aided design of VLSI circuits, large-scale database management, artificial intelligence, crime control, and strategic defense initiatives, just to name a few. The application domains of parallel processing computers are expanding steadily. With a good understanding of scalable computer architectures and mastery of parallel programming techniques, the reader will be better prepared to face future computing challenges.

1.5 Concrete Architectures of Computer Systems
At the system level, the description of the concrete architecture is based on processor-level building blocks, such as processors, memories, buses and so on. Its description includes the specification of the building blocks, of the interconnections among them, and of the operation of the whole system.
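A system-level description of a concrete architecture, building blocks plus their interconnections, can be captured in a small data structure, much as a PMS-level diagram would. This is only an illustrative sketch; the class and component names (`Component`, `SystemDescription`, `P0`, `B0`, and so on) are invented here.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A processor-level building block (processor, memory, bus, ...)."""
    name: str
    kind: str  # e.g. "processor", "memory", "bus"

@dataclass
class SystemDescription:
    """Concrete architecture at the system level: blocks + interconnections."""
    components: list = field(default_factory=list)
    links: list = field(default_factory=list)  # pairs of connected component names

    def connect(self, a, b):
        self.links.append((a, b))

# A minimal shared-bus system: two processors and one memory on one bus.
sys_desc = SystemDescription()
for name, kind in [("P0", "processor"), ("P1", "processor"),
                   ("M0", "memory"), ("B0", "bus")]:
    sys_desc.components.append(Component(name, kind))
for node in ("P0", "P1", "M0"):
    sys_desc.connect(node, "B0")

print([c.name for c in sys_desc.components], sys_desc.links)
```

The description deliberately says nothing about circuit- or logic-level detail; that separation of levels is exactly the hierarchy introduced in Section 1.1.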
1.6 Abstract Architecture of Processors
The abstract architecture of a processor is often referred to as simply the architecture of the processor. At the processor level, the abstract architecture is concerned with either the programming model or the hardware model of a particular processor.

1.7 Summary
The study of computer architecture involves both hardware organization and programming/software requirements. Flynn's classification shows the architectural evolution from sequential scalar computers to vector processors and parallel computers.

1.8 Exercise
1) Explain Flynn's classification.
2) Write a short note on the evolution of computer architecture.
3) What are parallel/vector computers?
