
Multimedia Data Allocation for Heterogeneous Memory Using Genetic/Parallel Algorithm in Cloud Computing

Group No: 37

Group Members:
Manish Das Mohapatra (IPG_2014054)
B Ravi Chandra (IPG_2014027)
K Kiran Kanth (IPG_2014043)

Supervisor:
Prof. S. Tapaswi
Background:
Recent extensions of the Internet of Things (IoT) built on cloud computing have been growing at a remarkable rate. As part of this growth, heterogeneous distributed cloud computing has enabled a variety of cloud-based system deployments, such as multimedia big data. Much prior research has investigated the optimization of on-premise heterogeneous memories. However, heterogeneous cloud memories face constraints due to performance limitations and cost concerns caused by hardware distributions and the manipulation mechanisms involved.
This paper concentrates on this issue and proposes a novel approach, the Cost-Aware Heterogeneous Cloud Memory Model (CAHCM), which aims to provide a high-quality cloud-based heterogeneous memory service offering. The main algorithm supporting CAHCM is the Dynamic Data Allocation Advance (2DA) Algorithm, which uses genetic programming to decide the data allocations on the cloud-based memories.
Motivation:
Why is this problem important?
Advances in cloud computing have inspired a variety of investigations into information retrieval for big data informatics in recent years. Heterogeneous clouds are viewed as a principal answer for performance improvement across diverse working conditions, since data processing has emerged as a major challenge in the booming field of big data.
At present, cloud-based memories are mostly deployed in a non-distributive manner on the cloud side. This arrangement causes various drawbacks, such as energy overloads, extra communications, and lower-performance resource allocation, which restricts the usage of cloud-based heterogeneous memories and their distribution.

Why did you choose this problem?

We chose this problem because we wanted a novel approach to handling data allocation in cloud-based heterogeneous memories, one that could be applied to big data for smart cities. The proposed model, CAHCM, is designed to offload big data processing to remote facilities by using cloud-based memories. Its main algorithm is the 2DA algorithm, which can yield optimal solutions at a high rate.
Why should others be interested in this problem?

Contemporary cloud infrastructure deployments mostly offload data processing and storage to the clouds. Central Processing Units (CPUs) and memories offering processing services are hosted by individual cloud vendors. This kind of organization can meet the processing and analysis demands of smaller-sized data. However, the continuous or intermittently changing workloads of big data-oriented usage have caused bottlenecks in achieving stable performance.
For instance, some data processing loads are closely tied to trends or periodic operations, such as yearly accounting and auditing. Thus, an adaptable approach that accommodates dynamic usage switches has become an urgent requirement in high-performance interactive multimedia big data.
Problem Statement:

Cost Optimization Problem on Heterogeneous Memories (COPHM): Given the initial status of the data and the capabilities of the cloud-based heterogeneous memories, including the number of data items, the number of Read and Write accesses, the number of memories and their availabilities, and the Read and Write costs of each memory, the problem is to find the data allocation plan that minimizes the total cost.
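As a sketch only (not the paper's notation), the total cost that COPHM minimizes can be written as a simple function of a candidate allocation; the names `read_counts`, `write_counts`, `read_cost`, and `write_cost` are our own illustrative assumptions:

```python
# Illustrative COPHM objective: sum each data item's access cost on its
# assigned memory. All parameter names are assumptions for this sketch.

def total_cost(allocation, read_counts, write_counts, read_cost, write_cost):
    """Total cost of placing data items on heterogeneous memories.

    allocation[i]  = index of the memory holding data item i
    read_counts[i] / write_counts[i] = Read/Write access counts of item i
    read_cost[m]  / write_cost[m]    = per-access cost of memory m
    """
    cost = 0.0
    for i, m in enumerate(allocation):
        cost += read_counts[i] * read_cost[m] + write_counts[i] * write_cost[m]
    return cost
```

For example, with two data items and two memories, placing item 0 on memory 0 and item 1 on memory 1 yields 10*1.0 + 2*4.0 + 5*2.0 + 3*1.0 = 31.0.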

The primary focus is the memory allocation of all the requests, irrespective of the quantity of data, which motivates developing a parallel-based algorithm to satisfy all the requests.

The final output is decided from a plot comparing the proposed algorithm against the optimized algorithm. The result is a data allocation plan, based on the available cloud memories, that provides the optimal solution minimizing the total cost.
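The genetic search behind this allocation plan can be illustrated with a minimal sketch. This is not the paper's 2DA algorithm (its encoding, operators, and capacity handling are defined there); it is a generic genetic algorithm over allocation vectors, with assumed names (`evolve`, `cost_fn`) and simple truncation selection:

```python
# Minimal genetic-algorithm sketch for a COPHM-style search (illustrative
# only; 2DA's actual operators and fitness are defined in the paper).
import random

def evolve(num_items, num_mems, cost_fn, pop=30, gens=100, mut=0.1, seed=0):
    rng = random.Random(seed)
    # Each chromosome maps data item i -> memory index allocation[i].
    population = [[rng.randrange(num_mems) for _ in range(num_items)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost_fn)           # lower total cost = fitter
        survivors = population[:pop // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_items)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(num_items):         # per-gene mutation
                if rng.random() < mut:
                    child[i] = rng.randrange(num_mems)
            children.append(child)
        population = survivors + children
    return min(population, key=cost_fn)
```

Because the fittest half of each generation survives unchanged, the best allocation found never gets worse across generations.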
Timeline:
[Gantt chart: phases Literature Review, Data Collection, Implementation, and Paper Work, plotted against number of days (0-50), with milestones on 15th May, 16th June, 25th July, and 17th August.]
