
A Presentation on

Parallel Computing

- Ameya Waghmare (Roll No. 41, BE CSE)

Guided by: Dr. R. P. Adgaonkar (HOD), CSE Dept.
Parallel computing is a form of computation
in which many instructions are carried out
simultaneously, operating on the principle
that large problems can often be divided
into smaller ones, which are then solved
concurrently (in parallel).

Why is it required?
With the increased use of computers
in every sphere of human activity,
computer scientists are faced
with two crucial issues today:

Processing has to be done faster than ever before.
Larger and more complex computational problems need to be solved.
Increasing the number of transistors
as per Moore's Law is not a solution,
as the accompanying frequency scaling
also drives up power consumption.
Power consumption has been a
major issue recently, as it causes
the problem of processor heating.
The perfect solution is PARALLELISM,
in hardware as well as in software.
Difference With Distributed Computing
When different processors/computers work
on a single common goal, it is parallel
computing.
E.g., ten men pulling a rope to lift up one
rock; supercomputers implement parallel
computing.
Distributed computing is where several
different computers work separately on a
multi-faceted computing workload.
E.g., ten men pulling ten ropes to lift ten
different rocks; employees working in an
office doing their own work.
Difference With Cluster Computing
A computer cluster is a group of linked
computers working together closely, so that in
many respects they form a single computer.
E.g., in an office of 50 employees, a group of 15
does some work, 25 some other work, and the
remaining 10 something else.
Similarly, in a network of 20 computers, 16 may
work on a common goal, whereas 4 on some
other common goal.
Cluster computing is a specific case of parallel
computing.
Difference With Grid Computing
Grid computing makes use of computers
communicating over the Internet to work on a
given problem.
E.g., three persons, one of them from the
USA, another from Japan, and a third from
Norway, working together online on a
common project.
Websites like Wikipedia, Yahoo! Answers,
YouTube, Flickr, or an open-source OS like
Linux are examples of grid computing.
Again, it serves as an example of parallel
computing.
The Concept Of Pipelining
In computing, a pipeline is a set of
data processing elements connected
in series, so that the output of one
element is the input of the next one.
The elements of a pipeline are often
executed in parallel or in time-sliced
fashion; in that case, some amount
of buffer storage is often inserted
between elements.
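As a rough illustration (not from the original slides; the stage functions and buffer sizes here are assumptions), the Python sketch below builds a two-stage pipeline, with a bounded queue acting as the buffer storage between elements:

import threading, queue

raw = queue.Queue(maxsize=4)      # buffer feeding stage 1
halfway = queue.Queue(maxsize=4)  # buffer between stage 1 and stage 2
DONE = object()                   # sentinel marking the end of the stream

def stage1():
    # first element: square each item and pass the result downstream
    while (item := raw.get()) is not DONE:
        halfway.put(item * item)
    halfway.put(DONE)

def stage2(results):
    # second element: consume stage-1 output and add one
    while (item := halfway.get()) is not DONE:
        results.append(item + 1)

results = []
t1 = threading.Thread(target=stage1)
t2 = threading.Thread(target=stage2, args=(results,))
t1.start(); t2.start()
for x in range(5):
    raw.put(x)        # feed the pipeline; the two stages overlap in time
raw.put(DONE)
t1.join(); t2.join()
print(results)        # [1, 2, 5, 10, 17]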
Approaches To Parallel Computing
Flynn's Taxonomy

SISD (Single Instruction, Single Data)
SIMD (Single Instruction, Multiple Data)
MISD (Multiple Instruction, Single Data)
MIMD (Multiple Instruction, Multiple Data)
Approaches Based On Computation

Massively Parallel
Embarrassingly Parallel
Grand Challenge Problems
Massively Parallel Systems
It signifies the presence of many
independent units, or entire
microprocessors, that run in parallel.
The term "massive" connotes hundreds,
if not thousands, of such units.
Example: the Earth Simulator (the world's
fastest supercomputer from 2002 to 2004).
Embarrassingly Parallel Systems
An embarrassingly parallel system is one
for which no particular effort is needed to
segment the problem into a very large
number of parallel tasks.
Examples include surfing two websites
simultaneously, or running two
applications on a home computer.
They lie at one end of the spectrum of
parallelisation, where tasks can be readily
parallelised.
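A minimal Python sketch of an embarrassingly parallel workload (the worker function and inputs are made up for illustration): each input is handled independently, so splitting the work takes no effort:

from multiprocessing import Pool

def check_page(url):
    # stand-in for visiting one website; each call is fully independent
    return (url, len(url))

if __name__ == "__main__":
    urls = ["site-a.example", "site-b.example", "site-c.example"]
    with Pool(processes=3) as pool:
        # Pool.map hands one URL to each worker; no coordination is needed
        print(pool.map(check_page, urls))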
Grand Challenge Problems
A grand challenge is a fundamental
problem in science or engineering, with
broad applications, whose solution would
be enabled by the application of high
performance computing resources that
could become available in the near future.
"Grand Challenges" were US policy terms
set as goals in the late 1980s for
funding high-performance computing and
communications research, in part in
response to the Japanese 5th Generation
(or Next Generation) 10-year project.
Types Of Parallelism

Bit-Level
Instruction-Level
Data
Task
Bit-Level Parallelism
When an 8-bit processor needs to add
two 16-bit integers, the addition has to
be done in two steps.
The processor must first add the 8
lower-order bits from each integer
using the standard addition instruction,
and then add the 8 higher-order bits using
an add-with-carry instruction and the
carry bit from the lower-order addition.
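The two-step addition can be sketched in Python as follows (an illustration of the idea, not real 8-bit hardware; the masks stand in for the processor's 8-bit registers):

def add16_with_8bit_alu(x, y):
    lo = (x & 0xFF) + (y & 0xFF)        # step 1: add the 8 lower-order bits
    carry = lo >> 8                     # carry bit out of the low byte
    hi = (x >> 8) + (y >> 8) + carry    # step 2: add-with-carry on the high bits
    return ((hi << 8) | (lo & 0xFF)) & 0xFFFF

print(add16_with_8bit_alu(0x01FF, 0x0001))   # 0x0200 == 512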
Instruction Level Parallelism
The instructions given to a computer
for processing can be divided into
groups, or re-ordered, and then
processed without changing the final
result.
This is known as instruction-level
parallelism (ILP).
An Example
1. e = a + b
2. f = c + d
3. g = e * f
Here, instruction 3 is dependent on
instructions 1 and 2.
However, instructions 1 and 2 can be
processed independently.
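A minimal Python sketch of the same three instructions (the thread pool and values are assumptions for illustration): instructions 1 and 2 run concurrently, while instruction 3 must wait for both:

from concurrent.futures import ThreadPoolExecutor

a, b, c, d = 1, 2, 3, 4
with ThreadPoolExecutor() as pool:
    fe = pool.submit(lambda: a + b)   # 1. e = a + b
    ff = pool.submit(lambda: c + d)   # 2. f = c + d  (runs alongside 1)
    e, f = fe.result(), ff.result()
g = e * f                             # 3. g = e * f  (depends on 1 and 2)
print(g)                              # 21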
Data Parallelism

Data parallelism focuses on
distributing the data across different
parallel computing nodes.
It is also called loop-level parallelism.
An Illustration
In a data-parallel implementation, CPU A
could add all elements from the top half of
the matrices, while CPU B could add all
elements from the bottom half.
Since the two processors work in parallel,
the job of performing matrix addition
would ideally take half the time of
performing the same operation serially
on one CPU alone.
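A minimal data-parallel sketch in Python (the process count and matrix values are assumptions): one worker adds the top half of the rows, the other the bottom half:

from multiprocessing import Pool

def add_rows(pair):
    rows_a, rows_b = pair
    # each worker adds its own slice of rows, independently of the other
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(rows_a, rows_b)]

if __name__ == "__main__":
    A = [[1, 2], [3, 4], [5, 6], [7, 8]]
    B = [[8, 7], [6, 5], [4, 3], [2, 1]]
    halves = [(A[:2], B[:2]), (A[2:], B[2:])]   # top half for CPU A, bottom for CPU B
    with Pool(processes=2) as pool:
        top, bottom = pool.map(add_rows, halves)
    print(top + bottom)   # [[9, 9], [9, 9], [9, 9], [9, 9]]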
Task Parallelism
Task parallelism focuses on
distributing tasks across different
processors.
It is also known as functional
parallelism or control parallelism.
An Example
As a simple example, if we are
running code on a 2-processor
system (CPUs "a" and "b") in a parallel
environment and we wish to do tasks
"A" and "B", it is possible to tell CPU
"a" to do task "A" and CPU "b" to do
task "B" simultaneously, thereby
reducing the runtime of the
execution.
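A minimal Python sketch of the same idea (the task bodies are placeholders): two different tasks run simultaneously on separate threads:

import threading

def task_A():
    print("task A: scanning input")   # one kind of work

def task_B():
    print("task B: writing report")   # a different kind of work

a = threading.Thread(target=task_A)   # tell CPU "a" to do task "A"
b = threading.Thread(target=task_B)   # tell CPU "b" to do task "B"
a.start(); b.start()                  # both tasks run at the same time
a.join(); b.join()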
Key Difference Between Data And Task Parallelism
Data parallelism: it is the division of
threads (processes), instructions, or tasks
internally into sub-parts for execution;
a task A is divided into sub-parts and
then processed.
Task parallelism: it is the division among
the threads (processes), instructions, or
tasks themselves; a task A and a task B
are processed separately by different
processors.
Implementation Of Parallel Computing In Software
When implemented in software (or
rather, in algorithms), the terminology
calls it parallel programming.
An algorithm is split into pieces and
then executed, as seen earlier.
Important Points In Parallel Programming
Dependencies: a typical scenario is when
line 6 of an algorithm depends on lines
2, 3, 4, and 5 (see the sketch after this list).
Application checkpoints: like saving the
state of the algorithm, or creating a
backup point.
Automatic parallelisation: identifying
dependencies and parallelising
algorithms automatically. This has
achieved only limited success.
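A small Python sketch of the dependency point above (the line numbers are the hypothetical ones from the bullet): the independent sub-computations can run in parallel, but the final step must wait for all of them:

from concurrent.futures import ThreadPoolExecutor

def part(n):
    return n * n          # independent sub-computations ("lines 2-5")

with ThreadPoolExecutor() as pool:
    results = list(pool.map(part, [2, 3, 4, 5]))
total = sum(results)      # "line 6": depends on every result above
print(total)              # 54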
Implementation Of Parallel Computing In Hardware
When implemented in hardware, it is
called parallel processing.
Typically, a chunk of the load for
execution is divided for processing
among units like cores, processors,
CPUs, etc.
An Example: Intel Xeon Series Processors
Thank You!
ANY QUERIES?
