
Parallel Processor Organisation

Introduction
Parallel processing is a type of computation in which many calculations or processes are carried out concurrently. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism [1]. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks.
Need for parallelism
Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown recently due to the physical constraints preventing further frequency scaling (encyclopedia, 2016). As power consumption by computers has become a concern in recent years, parallel processing has become the dominant paradigm in computer architecture [2], mostly in the form of multi-core processors (Rupali, 2016). In parallel computing, a computational task is typically broken down into several, often many, very similar subtasks that can be processed independently and whose results are combined afterwards, upon completion.
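As a minimal sketch of this decomposition (in Python, with a chunking scheme and worker count that are illustrative choices, not taken from the cited sources), a summation can be split into independent subtasks whose partial results are combined at the end:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each subtask sums its own slice of the data independently.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Break the work into four independent, very similar subtasks.
        chunks = [data[i::4] for i in range(4)]
        with Pool(processes=4) as pool:
            partials = pool.map(partial_sum, chunks)  # subtasks run in parallel
        total = sum(partials)                         # combine results afterwards
        print(total)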
In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest obstacles to getting good parallel program performance.
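To make the race-condition hazard concrete, the following sketch (Python is an illustrative language choice, not taken from the sources) shows two versions of a shared-counter update: an unsynchronized one, in which threads can interleave between the read and the write and lose updates, and a locked one that serializes the read-modify-write step. The explicit yield between read and write exists only to make the unlucky interleaving easy to observe.

    import threading
    import time

    counter = 0
    lock = threading.Lock()

    def unsafe_increment():
        global counter
        tmp = counter        # read the shared value
        time.sleep(0)        # yield the CPU: another thread may run here
        counter = tmp + 1    # write back; updates made in between are lost

    def safe_increment():
        global counter
        with lock:           # the lock makes the read-modify-write atomic
            counter += 1

    threads = [threading.Thread(target=unsafe_increment) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # typically far less than 100 because of lost updates

Re-running the same driver with safe_increment as the target reliably prints 100, at the cost of serializing the critical section, which is exactly the communication-and-synchronization overhead mentioned above.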
Traditionally, the computer has been viewed as a sequential machine. Most computer programming
languages require the programmer to specify algorithms as sequences of instructions.

A processor executes a program by executing machine instructions in sequence, one at a time.

Each instruction is executed as a sequence of operations: fetch instruction, fetch operands, perform the operation, and store the result. It is observed that, at the micro-operation level, multiple control signals are generated at the same time [3]. Instruction pipelining, at least to the extent of overlapping fetch and execute operations, has been around for a long time. Building on this experience, researchers have investigated whether several operations can be performed in parallel. As computer technology has evolved, and as the cost of computer hardware has dropped, computer designers have sought more and more opportunities for parallelism, usually to enhance performance and, in some cases, to increase availability.

Taxonomy of parallel processors

The taxonomy first introduced by Flynn is still the most common way of classifying programs and computers. Michael J. Flynn created one of the earliest classification systems for parallel (and sequential) computers and programs, now known as Flynn's taxonomy. Flynn classified programs and computers by whether they operate using a single set or multiple sets of instructions, and whether or not those instructions use a single set or multiple sets of data.
The single-instruction-single-data (SISD) classification is equivalent to an entirely sequential program. The single-instruction-multiple-data (SIMD) classification corresponds to performing the same operation repeatedly over a large data set, as is commonly done in signal processing applications. Multiple-instruction-single-data (MISD) is a rarely used classification. While computer architectures to deal with it were devised (such as systolic arrays), few applications that fit this class have materialized. Multiple-instruction-multiple-data (MIMD) programs are by far the most common type of parallel programs.
Single Instruction, Multiple Data (SIMD) systems: A single machine instruction controls
the simultaneous execution of a number of processing elements on a lockstep basis [4].
Each processing element has an associated data memory, so that each instruction is
executed on a different set of data by the different processors. Vector and array
processors fall into the SIMD category; a sketch of this style of operation follows this list.

Multiple Instruction, Single Data (MISD): A sequence of data is transmitted to a set of
processors, each of which executes a different instruction sequence. This structure has
never been implemented.

Multiple Instruction, Multiple Data (MIMD): A set of processors simultaneously executes
different instruction sequences on different data sets. SMPs, clusters, and NUMA systems
fit into this category (Stallings, 2010).
With the MIMD organization, the processors are general purpose; each is able to process all of
the instructions necessary to perform the appropriate data transformation.
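As a rough software analogy to the SIMD style (assuming NumPy as the library; a vectorized array expression is not literal array-processor hardware, but it does express one logical operation applied across a whole data set), compare an element-by-element loop with a single whole-array operation:

    import numpy as np

    a = np.arange(10_000, dtype=np.float64)
    b = np.ones(10_000, dtype=np.float64)

    # SISD style: one instruction stream working on one data element at a time.
    c_serial = [x + y for x, y in zip(a, b)]

    # SIMD style: a single logical "add" applied across the whole array;
    # on many CPUs the library maps this onto vector instructions.
    c_vector = a + b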
Additionally, MIMD can be subdivided into two main categories:
Symmetric multiprocessor (SMP): In an SMP, multiple processors share a single
memory or a pool of memory by means of a shared bus or other interconnection
mechanism. A distinguishing feature is that the memory access time to any region of
memory is approximately the same for every processor.
Non-uniform memory access (NUMA): In a NUMA machine, the memory access time may differ
across different regions of memory.
The design issues relating to SMPs and NUMA machines are complex, involving questions
of physical organization, interconnection structures, interprocessor communication,
operating system design, and application software techniques.
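At the programming level, the defining SMP property, every processor seeing the same memory, can be sketched with Python's multiprocessing shared objects; this is a software-level analogy (the worker count and typecode are illustrative), not a description of any particular SMP hardware:

    from multiprocessing import Process, Value

    def deposit(balance, times):
        for _ in range(times):
            # get_lock() serializes access to the single shared memory cell,
            # mirroring synchronized access to shared memory in an SMP.
            with balance.get_lock():
                balance.value += 1

    if __name__ == "__main__":
        balance = Value("i", 0)  # one integer placed in shared memory
        workers = [Process(target=deposit, args=(balance, 10_000))
                   for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(balance.value)     # 40000: all workers updated the same memory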

Figure: The organization of a multiprocessor system


CONCLUSION:
Parallel processing is the modern way of processing data and instructions; its main function is to execute multiple instructions at the same time.
It has been successfully applied in many fields of science and technology. In this paper we have presented some basic parallel
computing concepts, followed by the differences between serial and parallel computing and a taxonomy of parallel processors.
The review explicitly focuses on the need for parallelism, the conceptual differences between serial and parallel computing, and the major
factors influencing parallel processing.
KEYWORDS:
Parallel processing, serial processing, Flynn's taxonomy

REFERENCES

encyclopedia. (2016, October 10). Parallel computing. Retrieved September 15, 2016,
from Wikipedia: https://en.wikipedia.org/wiki/Parallel_computing

Rupali. (2016). Investigation into Gang Scheduling by Integration of Cache in Multi-core Processors. IJMEIT, 1732.

Stallings, W. (2010). Computer Organization and Architecture: Designing for Performance. Pearson Prentice Hall.

[1] : https://en.wikipedia.org/wiki/Parallel_computing
[2] : https://en.wikipedia.org/wiki/Parallel_computing
[3] : http://www.wbuthelp.com/chapter_file/4055.pdf
[4] : http://computingtutorial.com/types-parallel-processor-systems/
