
Going Back and Forth: Efficient Multideployment and Multisnapshotting on Clouds

INDEX

1.  ABSTRACT
2.  INTRODUCTION
    2.1 Company Profile
    2.2 Project Overview
3.  LITERATURE SURVEY
4.  SYSTEM REQUIREMENTS
    4.1 Application
    4.2 Hardware
5.  SDLC MODEL FOR THE PROJECT
6.  SYSTEM REQUIREMENT SPECIFICATION (SRS)
    6.1 Introduction
    6.2 Purpose of the System
    6.3 Existing System
    6.4 Proposed System
7.  SYSTEM DESIGN
8.  LOGICAL DESIGN
    8.1 Data Flow Diagram (DFD)
    8.2 Use Case Diagram
    8.3 Class Diagram
    8.4 Sequence Diagram
    8.5 Activity Diagram
9.  DATABASE DESIGN
    9.1 Table Design / IO Design
10. IMPLEMENTATIONS
11. SYSTEM TESTING
    11.1 Introduction
    11.2 Importance of Testing
    11.3 Testing Definitions
12. REPORTS/OUTPUT SCREENS
13. CONCLUSION
14. FUTURE ENHANCEMENTS
15. BIBLIOGRAPHY



1. ABSTRACT
Infrastructure as a Service (IaaS) cloud computing has transformed the way we think about acquiring resources by introducing a simple change: allowing users to lease computational resources from the cloud provider's datacenter for a short time by deploying virtual machines (VMs) on those resources. This new model raises new challenges in the design and development of IaaS middleware. One of these challenges is the need to deploy a large number (hundreds or even thousands) of VM instances simultaneously. Once the VM instances are deployed, another challenge is to simultaneously take a snapshot of many images and transfer them to persistent storage to support management tasks such as suspend-resume and migration. With datacenters growing rapidly and configurations becoming heterogeneous, it is important to enable efficient concurrent deployment and snapshotting that are at the same time hypervisor independent and ensure maximum compatibility with different configurations. This paper addresses these challenges by proposing a virtual file system specifically optimized for virtual machine image storage. It is based on a lazy transfer scheme coupled with object versioning that handles snapshotting transparently in a hypervisor-independent fashion, ensuring high portability across different configurations.


2. INTRODUCTION
2.1 Company Profile

SOFTWARE SOLUTIONS
JDK Technologies is an IT services company based in Bangalore, India, providing services in software consulting, application development, outsourcing, recruitment, and training. It started operations in 2011. JDK Technologies Software Solutions is an IT solution provider for a dynamic environment where business and technology strategies converge. Its approach focuses on new ways of doing business, combining IT innovation and adoption while also leveraging an organization's current IT assets. The company works with large global corporations and with new products or services to implement prudent business and technology strategies in today's environment.
JDK TECHNOLOGIES RANGE OF EXPERTISE INCLUDES:

Software Development Services
Engineering Services
Systems Integration
Customer Relationship Management
Product Development
Consulting
IT Outsourcing

We apply technology with innovation and responsibility to achieve two broad objectives:


Effectively address the business issues our customers face today.
Generate new opportunities that will help them stay ahead in the future.

THIS APPROACH RESTS ON:
A strategy where we architect, integrate and manage technology services and solutions - we call it AIM for success.
A robust offshore development methodology and reduced demand on customer resources.
A focus on the use of reusable frameworks to provide cost and time benefits.

They combine the best people, processes and technology to achieve excellent results consistently. We offer customers the following advantages:
SPEED: They understand the importance of timing, of getting there before the competition. A rich portfolio of reusable, modular frameworks helps jump-start projects. A tried and tested methodology ensures that we follow a predictable, low-risk path to achieve results. Our track record is testimony to complex projects delivered within and even before schedule.
EXPERTISE: Our teams combine cutting-edge technology skills with rich domain expertise. What's equally important, they share a strong customer orientation, which means they actually start by listening to the customer. They are focused on coming up with solutions that serve customer requirements today and anticipate future needs.


A FULL SERVICE PORTFOLIO: They offer customers the advantage of being able to architect, integrate and manage technology services. This means that customers can rely on one fully accountable source instead of trying to integrate disparate multi-vendor solutions.
SERVICES: JDK Technologies provides its services to companies in fields such as production and quality control. With their rich expertise and experience in information technology, they are in the best position to provide software solutions for distinct business requirements.


2.2 PROJECT OVERVIEW

Infrastructure as a Service (IaaS) cloud computing has emerged as a viable alternative to the acquisition and management of physical resources. With IaaS, users can lease storage and computation time from large datacenters. Leasing of computation time is accomplished by allowing users to deploy virtual machines (VMs) on the datacenter's resources. Since the user has complete control over the configuration of the VMs using on-demand deployments, IaaS leasing is equivalent to purchasing dedicated hardware but without the long-term commitment and cost. The on-demand nature of IaaS is critical to making such leases attractive, since it enables users to expand or shrink their resources according to their computational needs, by using external resources to complement their local resource base.

The problem of deploying many VM instances efficiently is particularly acute for VM images used in scientific computing, where image sizes are large. A typical deployment consists of hundreds or even thousands of such images. Conventional deployment techniques broadcast the images to the nodes before starting the VM instances, a process that can take tens of minutes to hours, not counting the time to boot the operating system itself. We rely on four key principles: aggregate the storage space, optimize VM disk access, reduce contention, and optimize multisnapshotting.


Aggregate the storage space locally available on the compute nodes

In most cloud deployments the disks locally attached to the compute nodes are not exploited to their full potential. Most of the time, such disks are used to hold local copies of the images corresponding to the running VMs, as well as to provide temporary storage for them during their execution, which utilizes only a small fraction of the total disk size. We propose to aggregate the storage space from the compute nodes into a shared common pool that is managed in a distributed fashion, on top of which we build our virtual file system. This approach has two key advantages. First, it has a potential for high scalability, as a growing number of compute nodes automatically leads to a larger VM image repository, which is not the case if the repository is hosted by dedicated machines. Second, it frees a large amount of storage space and overhead related to VM management on dedicated storage nodes, which can improve performance and/or quality-of-service guarantees for specialized storage services that the applications running inside the VMs require and that are often offered by the cloud provider (e.g., database engines, distributed hash tables, special-purpose file systems, etc.). An important issue in this context is to be able to leverage the storage space provided by the local disks without interfering with the normal VM execution. A minimal sketch of how such a pool could map image chunks onto node-local disks is shown below.
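In the Java sketch below (class and method names are hypothetical, not part of the system), the locally attached disks of all compute nodes are viewed as one logical pool: every fixed-size chunk of a VM image is assigned to exactly one node, so the repository grows automatically as compute nodes are added.

import java.util.List;

// Minimal sketch (hypothetical names): the local disks of all compute nodes
// form one logical pool; each fixed-size image chunk is mapped to one node.
public class StoragePool {

    private final List<String> nodes;  // host names of the compute nodes in the pool
    private final long chunkSize;      // size of one image chunk in bytes

    public StoragePool(List<String> nodes, long chunkSize) {
        this.nodes = nodes;
        this.chunkSize = chunkSize;
    }

    // Index of the chunk that contains a given byte offset of the image.
    public long chunkIndex(long offset) {
        return offset / chunkSize;
    }

    // Node responsible for storing a given chunk (simple hash-based placement).
    public String nodeFor(String imageId, long chunkIndex) {
        int slot = Math.floorMod((imageId + "#" + chunkIndex).hashCode(), nodes.size());
        return nodes.get(slot);
    }

    public static void main(String[] args) {
        StoragePool pool = new StoragePool(List.of("node01", "node02", "node03"), 256 * 1024);
        long offset = 5L * 1024 * 1024;                 // a read 5 MB into the image
        long chunk = pool.chunkIndex(offset);
        System.out.println("chunk " + chunk + " lives on " + pool.nodeFor("debian-base.raw", chunk));
    }
}

In the actual system the placement and replication of chunks is handled by the distributed storage service itself (for example BlobSeer, mentioned later in this report); the hash-based placement above is only a stand-in for illustration.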


Optimize VM disk access by using on-demand image mirroring

When a new VM needs to be instantiated, the underlying VM image is presented to the hypervisor as a regular file accessible from the local disk. Read and write accesses to the file, however, are trapped and treated in a special fashion. A read that is issued on a fully or partially empty region in the file that has not been accessed before (by either a previous read or write) results in fetching the missing content remotely from the VM repository, mirroring it on the local disk and redirecting the read to the local copy. If the whole region is available locally, no remote read is performed. Writes, on the other hand, are always performed locally. A minimal sketch of this read/write trapping is given below.
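In the sketch below, the Repository interface and the chunk-granularity bookkeeping are assumptions made for the example; the system described in this report performs the trapping inside a virtual file system and may track accessed regions at a finer grain.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.BitSet;

// Minimal sketch (hypothetical names) of on-demand mirroring: chunks that were
// never touched are fetched from the remote repository into a sparse local
// mirror; afterwards all reads and writes are served from the local copy.
public class MirroredImage {

    interface Repository {  // assumed interface to the remote VM image repository
        byte[] fetchChunk(String imageId, long chunkIndex) throws IOException;
    }

    private final String imageId;
    private final FileChannel localMirror;        // sparse file on the local disk
    private final Repository repository;
    private final int chunkSize;
    private final BitSet present = new BitSet();  // which chunks are already local

    MirroredImage(String imageId, Path mirrorFile, Repository repo, int chunkSize) throws IOException {
        this.imageId = imageId;
        this.localMirror = FileChannel.open(mirrorFile,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
        this.repository = repo;
        this.chunkSize = chunkSize;
    }

    // Bring every chunk overlapping [offset, offset + length) into the local mirror.
    private void mirror(long offset, int length) throws IOException {
        long first = offset / chunkSize;
        long last = (offset + length - 1) / chunkSize;
        for (long c = first; c <= last; c++) {
            if (!present.get((int) c)) {
                byte[] data = repository.fetchChunk(imageId, c);          // remote fetch
                localMirror.write(ByteBuffer.wrap(data), c * chunkSize);  // mirror locally
                present.set((int) c);
            }
        }
    }

    // Read trapped from the hypervisor: mirror missing chunks, then serve locally.
    synchronized int read(ByteBuffer dst, long offset) throws IOException {
        mirror(offset, dst.remaining());
        return localMirror.read(dst, offset);
    }

    // Write trapped from the hypervisor: applied to the local copy only.
    // (Simplification: partially covered chunks are mirrored first so that later
    // reads of their untouched bytes never need the repository again.)
    synchronized int write(ByteBuffer src, long offset) throws IOException {
        mirror(offset, src.remaining());
        return localMirror.write(src, offset);
    }
}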

Reduce contention by striping the image

Each VM image is split into small, equal-sized chunks that are evenly distributed among the local disks participating in the shared pool. When a read accesses a region of the image that is not available locally, the chunks that hold this region are determined and transferred in parallel from the remote disks that are responsible for storing them. Under concurrency, this scheme effectively enables the distribution of the I/O workload, because accesses to different parts of the image are served by different disks. Even in the worst-case scenario, when all VMs read the same chunks in the same order concurrently (for example, during the boot phase), there is a high chance that the accesses get skewed and thus are not issued at exactly the same time. This effect happens for various reasons, such as different hypervisor initialization overheads and the interleaving of CPU time with I/O access (which under concurrency leads to a situation where some VMs execute code during the time in which others issue remote reads). For example, when booting 100 VM instances simultaneously, we measured two random instances to have, on average, a skew of about 100 ms between the times they access the boot sector of the initial image. This skew grows the longer the VM instances continue with the boot process. What this means is that at some point under concurrency they will access different chunks, which are potentially stored on different storage nodes, and thus contention is reduced. While splitting the image into chunks reduces contention, the effectiveness of this approach depends on the chunk size and is subject to a trade-off. A chunk that is too large may lead to false sharing; that is, many small concurrent reads on different regions in the image might fall inside the same chunk, which leads to a bottleneck. A chunk that is too small, on the other hand, implies a higher access overhead, both because of higher network overhead, resulting from having to perform small data transfers, and because of higher metadata access overhead, resulting from having to manage more chunks. A sketch of fetching the chunks of a region in parallel follows.
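For illustration only (the ChunkStore interface and the fixed-size thread pool are assumptions, not the system's API), the chunks covering a missing region are requested concurrently so that different remote disks serve different parts of one read:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch (hypothetical names): fetch all chunks of a requested image
// region in parallel, each one from the node that is responsible for it.
public class ParallelChunkFetcher {

    interface ChunkStore {  // assumed per-node chunk service
        byte[] fetch(String imageId, long chunkIndex) throws Exception;
    }

    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    // Fetch every chunk in [firstChunk, lastChunk] concurrently and return them in order.
    public List<byte[]> fetchRange(ChunkStore store, String imageId,
                                   long firstChunk, long lastChunk) throws Exception {
        List<Future<byte[]>> pending = new ArrayList<>();
        for (long c = firstChunk; c <= lastChunk; c++) {
            final long chunk = c;
            pending.add(workers.submit(() -> store.fetch(imageId, chunk)));  // one task per chunk
        }
        List<byte[]> chunks = new ArrayList<>();
        for (Future<byte[]> f : pending) {
            chunks.add(f.get());  // chunks arrive from different disks in parallel
        }
        return chunks;
    }

    public void shutdown() {
        workers.shutdown();
    }
}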

Optimize multisnapshotting by means of shadowing and cloning

Saving a full VM image for each VM is not feasible in the context of multisnapshotting. Since only small parts of the VMs are modified, this would mean massive unnecessary duplication of data, leading not only to an explosion of utilized storage space but also to an unacceptably high snapshotting time and network bandwidth utilization. For this reason, several custom image file formats were proposed that optimize taking incremental VM image snapshots. This approach enables snapshots to share unmodified content, which lowers storage space requirements. However, it presents several drawbacks.


First, a new snapshot is created by storing incremental differences as a separate file, while leaving the original file corresponding to the initial image untouched and using it as a backing file. When taking snapshots of the same image successively, a chain of files that depend on each other is obtained, which raises many issues related to manageability. For example, one must keep track of the dependencies between files. Even when such functionality is implemented, the cloud customer has to download a whole set of files from the cloud in order to get a local copy of a single VM snapshot, an operation that makes VM image downloads both costly and complex. Furthermore, in production use, this can lead to significant performance issues: a huge number of files will accumulate over time, thereby introducing a large metadata overhead. Second, a custom image file format limits the migration capabilities. If the destination host where the VM needs to be migrated runs a different hypervisor that does not understand the custom image file format, migration is not possible. Therefore, it is highly desirable to satisfy three requirements simultaneously:

Store only the incremental differences between snapshots.
Consolidate each snapshot as a standalone entity.
Present a simple raw image format to the hypervisors to maximize migration portability.

We propose a solution that addresses these three requirements by leveraging two features offered by versioning systems: shadowing and cloning.


Shadowing: offering the illusion of creating a new standalone snapshot of the object for each update to it, while physically storing only the differences and manipulating metadata in such a way that the illusion is upheld. This effectively means that, from the user's point of view, each snapshot is a first-class object that can be accessed independently. For example, let's assume a small part of a large file needs to be updated. With shadowing, the user sees the effect of the update as a second file that is identical to the original except for the updated part.

Cloning: duplicating an object in such a way that it looks like a standalone copy that can evolve in a different direction from the original, while physically sharing all initial content with the original. Therefore, we propose to deploy a distributed versioning system that efficiently supports shadowing and cloning, while consolidating the storage space of the local disks into a shared common pool. With this approach, snapshotting can easily be performed in the following fashion. The first time a snapshot is taken, a new virtual image clone is created from the initial image for each VM instance. Subsequent local modifications are written as incremental differences to the clones and shadowed. A minimal sketch of this cloning and shadowing scheme is given below.
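In the Java sketch below (hypothetical class names; the report later realizes this on top of a versioning storage service), a snapshot is just a map from chunk indices to stored chunk versions: a clone copies the map rather than the data, and a write shadows only the chunk it touches.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch (hypothetical names): every snapshot is a map from chunk index
// to the identifier of a stored chunk version. Clones copy the map, not the data,
// and writes replace only the entries of the chunks they modify, so snapshots
// share all unmodified content while each map still describes a complete image.
public class VersionedImage {

    private final Map<Long, String> chunkVersions;  // chunk index -> stored chunk id

    private VersionedImage(Map<Long, String> chunkVersions) {
        this.chunkVersions = chunkVersions;
    }

    // The initial image: every chunk points at the corresponding base-image chunk.
    public static VersionedImage initial(long chunkCount, String baseImageId) {
        Map<Long, String> map = new HashMap<>();
        for (long c = 0; c < chunkCount; c++) {
            map.put(c, baseImageId + "/chunk-" + c);
        }
        return new VersionedImage(map);
    }

    // Cloning: a standalone copy that still shares every chunk with the original.
    public VersionedImage cloneImage() {
        return new VersionedImage(new HashMap<>(chunkVersions));
    }

    // Shadowing: a local modification replaces only the entry of the chunk it touches.
    public void writeChunk(long chunkIndex, String newChunkId) {
        chunkVersions.put(chunkIndex, newChunkId);
    }

    // Which stored chunk backs a given chunk index in this snapshot.
    public String resolve(long chunkIndex) {
        return chunkVersions.get(chunkIndex);
    }
}

Since resolving any chunk of any snapshot always yields complete content, each snapshot can be exposed to the hypervisor as a standalone raw image, while unmodified chunks remain physically shared among all snapshots.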


3 LITERATURE SURVEY
A literature survey is an important step in the software development process. Before developing the tool, it is necessary to determine the time factor, economy and company strength. Once these things are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system, the above considerations are taken into account for developing the proposed system.


4 SYSTEM REQUIREMENTS
The application requirement specification is produced at the culmination of the analysis task. The function and performance allocated to the application as part of system engineering are refined by establishing a complete information description, a functional representation, a representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.

4.1 Application Requirements

Language          : Java
Front End Tool    : NetBeans 7.1
Back End Tool     : MySQL Server 2005
Operating System  : Windows, Mac, Linux

4.2 Hardware Requirements

Hard disk         : 60 GB
SDRAM             : 1 GB
Processor         : Intel Core 2 Duo
Processor Speed   : 2 GHz


5 SDLC MODEL FOR THE PROJECT


The relationship of each stage to the others can be roughly described as a waterfall, where the outputs from a specific stage serve as the initial inputs for the following stage. During each stage, additional information is gathered or developed, combined with the inputs, and used to produce the stage deliverables. It is important to note that the additional information is restricted in scope; new ideas that would take the project in directions not anticipated by the initial set of high-level requirements are not incorporated into the project. Rather, ideas for new capabilities or features that are out of scope are preserved for later consideration. After the project is completed, the Primary Developer Representative (PDR) and Primary End-User Representative (PER), in concert with other customer and development team personnel, develop a list of recommendations for enhancement of the current project.


6 SYSTEM REQUIREMENT SPECIFICATION (SRS)

6.1 Introduction


The purpose of this SRS is to specify the requirements of the application, which manages the content of a cloud infrastructure. This application requirements specification provides a complete description of all the functions and specifications of the modules. This document contains the application requirements of Efficient Multideployment and Multisnapshotting on Clouds. The SRS is produced at the culmination of the analysis task. The function and performance allocated to the application as part of system engineering are refined by establishing a complete information description, a detailed functional description, a representation of system behavior, an indication of performance requirements and design constraints, appropriate validation criteria, and other information related to requirements.

6.2 Purpose of the System


The main purpose of our system is to make multideployment and multisnapshotting on clouds easy by developing an application that replaces the manual multideployment and multisnapshotting process with an automated one. This document serves as the unambiguous guide for the developers of this application.


6.3 EXISTING SYSTEM

The huge computational potential offered by large distributed systems is hindered by poor data sharing scalability. We addressed several major requirements related to these challenges. One such requirement is the need to efficiently cope with massive unstructured data (organized as huge sequences of bytes - BLOBs that can grow to TB) in very large-scale distributed systems while maintaining a very high data throughput for highly concurrent, fine-grain data accesses.

The role of virtualization in clouds is also emphasized by identifying it as a key component. Moreover, clouds have been defined simply as virtualized hardware and software plus the previous monitoring and provisioning technologies.

Cloud computing is a buzzword that covers a wide variety of aspects such as deployment, load balancing, provisioning, and data and processing outsourcing.

DISADVANTAGES

Performance is poor and storage space is used inefficiently.
Network traffic consumption is very high, because the application's access pattern is not taken into account.
It does not allow building a scalable, high-performance distributed data-storage service that facilitates data sharing at large scale.


6.4 PROPOSED SYSTEM

We propose a distributed virtual file system specifically optimized for both the multideployment and multisnapshotting patterns. Since the patterns are complementary, we investigate them in conjunction. Our proposal offers a good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files (understood by most hypervisors) to the outside.

We introduce a series of design principles that optimize the multideployment and multisnapshotting patterns and describe how our design can be integrated with IaaS infrastructures. We show how to realize these design principles by building a virtual file system that leverages versioning-based distributed storage services. To illustrate this point, we describe an implementation on top of BlobSeer, a versioning storage service specifically designed for high throughput under concurrency.

ADVANTAGE

A good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files.


7. SYSTEM DESIGN

Systems design is the process or art of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap and synergy with the disciplines of systems analysis, systems architecture and systems engineering.

7.1 MODULES

Cloud infrastructure
Application state maintenance
Application access pattern
Aggregate the storage
Image mirroring
Striping the image
Optimize multisnapshotting
Zoom on mirroring


7.2 MODULE DESCRIPTION

7.2.1 CLOUD INFRASTRUCTURE

IaaS platforms are typically built on top of clusters made of loosely coupled commodity hardware that minimizes per-unit cost and favors low power over maximum speed. Disk storage (cheap hard drives with capacities on the order of several hundred GB) is attached to each machine, while the machines are interconnected with standard Ethernet links. The machines are configured with proper virtualization technology, in terms of both hardware and software, such that they are able to host the VMs. In order to provide persistent storage, a dedicated repository is deployed either as a centralized or as a distributed storage service running on dedicated storage nodes.

7.2.2 APPLICATION STATE MAINTENANCE

The VM deployment is defined at each moment in time by two main components: the state of each of the VM instances and the state of the communication channels between them (open sockets, in-transit network packets, virtual topology, etc.). Saving the application state therefore implies saving both the state of all VM instances and the state of all active communication channels among them. While several methods have been established in the virtualization community to capture the state of a running VM (CPU registers, RAM, state of devices, etc.), capturing the global state of the communication channels is difficult and still an open problem.


7.2.3 APPLICATION ACCESS PATTERN

A VM typically does not access the whole initial image. For example, it may never access some applications and utilities that are installed by default with the operating system. In order to model this aspect, it is useful to analyze the life cycle of a VM instance, which is based on three phases: boot, application, and shutdown.

7.2.4 AGGREGATE THE STORAGE

In most cloud deployments, the disks locally attached to the compute nodes are not exploited to their full potential. Most of the time, such disks are used to hold local copies of the images corresponding to the running VMs, as well as to provide temporary storage for them during their execution, which utilizes only a small fraction of the total disk size.

7.2.5 IMAGE MIRRORING

When a new VM needs to be instantiated, the underlying VM image is presented to the hypervisor as a regular file accessible from the local disk. Read and write accesses to the file, however, are trapped and treated in a special fashion. A read that is issued on a fully or partially empty region in the file that has not been accessed before (by either a previous read or write) results in fetching the missing content remotely from the VM repository, mirroring it on the local disk and redirecting the read to the local copy. If the whole region is available locally, no remote read is performed. Writes, on the other hand, are always performed locally.


7.2.6 STRIPING THE IMAGE

Each VM image is split into small, equal-sized chunks that are evenly distributed among the local disks participating in the shared pool. When a read accesses a region of the image that is not available locally, the chunks that hold this region are determined and transferred in parallel from the remote disks that are responsible for storing them. Under concurrency, this scheme effectively enables the distribution of the I/O workload, because accesses to different parts of the image are served by different disks.

7.2.7 OPTIMIZE MULTISNAPSHOTTING

Saving a full VM image for each VM is not feasible in the context of multisnapshotting. Since only small parts of the VMs are modified, this would mean massive unnecessary duplication of data, leading not only to an explosion of utilized storage space but also to unacceptably high snapshotting time and network bandwidth utilization.

7.2.8 ZOOM ON MIRRORING

One important aspect of on-demand mirroring is the decision of how much to read from the repository when data is unavailable locally, in such a way as to obtain good access performance. A straightforward approach is to translate every read issued by the hypervisor into either a local or a remote read, depending on whether the requested content is locally available. While this approach works, its performance is questionable. More specifically, many small remote read requests to the same chunk generate significant network traffic overhead (because of the extra networking information encapsulated with each request) as well as low throughput (because the latencies of the requests add up). A small sketch of rounding such reads out to chunk boundaries follows.
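In the illustration below (the helper and its names are hypothetical), the requested byte range is rounded out to whole chunk boundaries, so that many small reads falling inside the same chunk trigger at most one remote transfer:

// Minimal sketch (hypothetical helper): small reads are rounded out to full
// chunk boundaries before going remote, so reads that fall inside the same
// chunk are satisfied by a single transfer from the repository.
public final class ReadCoalescing {

    // Inclusive chunk range that must be local before [offset, offset + length) can be served.
    public static long[] chunksToFetch(long offset, int length, int chunkSize) {
        long firstChunk = offset / chunkSize;
        long lastChunk = (offset + length - 1) / chunkSize;
        return new long[] { firstChunk, lastChunk };
    }

    public static void main(String[] args) {
        int chunkSize = 256 * 1024;
        // Three small reads issued by the hypervisor, all inside the same 256 KB chunk:
        long[] a = chunksToFetch(10_000, 512, chunkSize);
        long[] b = chunksToFetch(20_000, 512, chunkSize);
        long[] c = chunksToFetch(30_000, 512, chunkSize);
        // All three map to chunk 0, so only one remote transfer is needed.
        System.out.printf("reads map to chunks [%d,%d], [%d,%d], [%d,%d]%n",
                a[0], a[1], b[0], b[1], c[0], c[1]);
    }
}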


8. LOGICAL DESIGN
During the logical design phase of the solution life cycle, you design a logical architecture showing the interrelationships of the logical components of the solution. The logical architecture and the usage analysis from the technical requirements phase form a deployment scenario, which is the input to the deployment design phase. The Logical Architecture defines the Processes (the activities and functions) that are required to provide the required User Services. Many different Processes must work together and share information to provide a User Service. The Processes can be implemented via software, hardware, or firmware. The Logical Architecture is independent of technologies and implementations.

The Logical Architecture consists of Processes (defined above), Data Flows, Terminators, and Data Stores. Data Flows identify the information that is shared by the Processes. The entry and exit points for the Logical Architecture are the sensors, computers, and human operators of the ITS systems (called Terminators). These Terminators appear in the Physical Architecture as well. Data Stores are repositories of information maintained by the Processes.


The Logical Architecture is presented to the reader via Data Flow Diagrams (DFDs), or bubble charts, and Process Specifications (PSpecs). The DFDs are graphical presentations of the Processes, Terminators, Data Flows, and Data Stores in the architecture. The DFDs are organized hierarchically, starting from the highest-level activity "Manage ITS". High-level activities are then decomposed functionally through multiple levels to arrive at the fundamental ITS processes and activities. The PSpecs are textual descriptions of the most rudimentary processes in the Logical Architecture. Each PSpec description consists of an overview, a set of functional requirements, and a complete listing of inputs and outputs. A system designer can use these descriptions as a guide to writing the specifications for the systems that will implement the processes described. All of the PSpecs and Subsystem entries are hyperlinked to detailed descriptions in this document. The "Processes" link in the figure above presents a list of all of the DFDs and the PSpecs defined in this version of the architecture. Also included are the Subsystems from the Physical Architecture that utilize the PSpecs.


8.1 Data Flow Diagram (DFD)


The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.

[Figure: Data flow diagram. The user registers and obtains authorization to store resources in the datacenter. Each resource runs a hypervisor hosting VMs, controlled through a control API; file requests are served from the local disk or from the centralized data storage.]


8.2 Use Case Diagram

[Figure: Use case diagram. Actors: User and Storage and Mirroring Server. Use cases: create user, login, cloud infrastructure, zooming on mirroring, optimize multisnapshotting, create duplicate key, insert the data, update data, copy the data, search data.]


8.3 Class Diagram


[Figure: Class diagram. Classes: Registration (user name, password, confirm password, network; insert into registration()); Cloud Infrastructure (sno, sname, age, address, cell number; insert into cloud()); Aggregate the Storage and Mirroring (aggregate the storage and mirroring(), optimize multi snapshotting(), zooming and mirroring(); insert(), copy(), search(), update()); Zooming and Mirroring (sno, sname, age, address, cell number; server(), vm-server()); Optimize Multi Snapshotting (sno, sname, age, address, cell number, no. of duplicate key; find(), duplicate()).]


8.4 Sequence Diagram

[Figure: Sequence diagram. Participants: user, cloud, storage and mirroring server, database, optimize multi snapshotting, zooming on mirroring. Flow: the user enters the details; the cloud information is stored through the server into the database; copy, search and update operations are then performed; the data is viewed and a duplicate key is generated; finally the data is stored in the database.]


8.5 Activity Diagram

[Figure: Activity diagram. The user logs in (with a check that loops back on failure), then proceeds through the cloud and the storage and mirroring server to insert, copy, search and update the data, followed by optimize multisnapshotting and zooming and mirroring.]


9. DATABASE DESIGN
Database design is the process of producing a detailed data model of a database. This logical data model contains all the needed logical and physical design choices and physical storage parameters needed to generate a design in a Data Definition Language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity. The term database design can be used to describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term database design could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the database management system (DBMS).


9.1 Table Design / IO Design

Table Name: tbl_login (Autoincrement: Yes)

Column Name   Data Type   Length   Allow Nulls   Constraint
Id            Int         20       No            Primary key
Name          Varchar     20       No
Password      Varchar     20       No
Email         Varchar     50
Address       Varchar     100

Table Name: tbl_UserDetails (Autoincrement: Yes)

Column Name   Data Type   Length   Allow Nulls   Constraint
Uname         Varchar     20       No            Primary key
Password      Varchar     20       No
Compword      Varchar     20       No
Network       Varchar     20       No


Table Name: tbl_Stud (Autoincrement: Yes)

Column Name   Data Type   Length   Allow Nulls   Constraint
Snumber       Int         20       No            Primary key
Sname         Varchar     30       No
Age           Int         3        No
Address       Varchar     40       No
Phno          Int         10

Table Name: tbl_Cloud (Autoincrement: Yes)

Column Name   Data Type   Length   Allow Nulls   Constraint
Uname         Varchar     20       No            Primary key
Password      Varchar     30       No
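As a hedged illustration of how these tables could be created from the Java front end via JDBC (the connection URL, schema name "clouddb", credentials, and NOT NULL choices are assumptions, not values taken from the project files), the tbl_login design above maps to a statement like the following:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Minimal sketch, assuming a local MySQL instance and a schema named "clouddb"
// (both hypothetical): creates the tbl_login table described above through JDBC.
public class CreateLoginTable {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/clouddb";  // assumed connection settings
        try (Connection con = DriverManager.getConnection(url, "root", "password");
             Statement st = con.createStatement()) {
            st.executeUpdate(
                "CREATE TABLE IF NOT EXISTS tbl_login ("
                + " Id INT AUTO_INCREMENT PRIMARY KEY,"
                + " Name VARCHAR(20) NOT NULL,"
                + " Password VARCHAR(20) NOT NULL,"
                + " Email VARCHAR(50),"
                + " Address VARCHAR(100)"
                + ")");
            System.out.println("tbl_login created");
        }
    }
}

The remaining tables (tbl_UserDetails, tbl_Stud, tbl_Cloud) can be created with analogous statements following their column layouts above.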


10. IMPLEMENTATIONS

Implementation is the stage of the project when the theoretical design is turned into a working system. It can thus be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of changeover methods.

The focus of this unit is on implementing a project. The first part considers how the activities of a project start. Although planning and action run side by side, it is often difficult to initiate action to progress the first tasks. Once things start to happen, the project enters a new stage.

Management of the project changes, from stimulating the initial action to monitoring and reviewing it in order to control the project's progress. Control systems are essential in managing a project of any size, to ensure that the project achieves its intended outcomes.

It is very rare for a project to run smoothly, however, and anyone managing a project can expect to have to keep up the momentum and to solve problems that arise. Good communications contribute a great deal to the process, and help all the stakeholders in developing a shared understanding of how the project is progressing. This unit addresses each of these considerations in turn.


1. Prepare the infrastructure. Many solutions are implemented into a production environment that is separate and distinct from where the solution was developed and tested. It is important that the characteristics of the production environment be accounted for. This strategy includes a review of hardware, software, communications, etc. In our example above, the potential desktop capacity problem would have been revealed if we had done an evaluation of the production (or real-world) environment. When you are ready for implementation, the production infrastructure needs to be in place.

2. Coordinate with the organizations involved in implementation. This may be as simple as communicating to your client community. However, few solutions today can be implemented without involving a number of organizations. For IT solutions, there are usually one or more operations and infrastructure groups that need to be communicated with ahead of time. Many of these groups might actually have a role in getting the solution successfully deployed. Part of the implementation work is to coordinate the work of any other groups that have a role to play. In some cases, developers simply failed to plan ahead and make sure the infrastructure groups were prepared to support the implementation. As a result, the infrastructure groups were forced to drop everything to make the implementation a success.

3. Implement training. Many solutions require users to attend training or more informal coaching sessions. This type of training could be completed in advance, but the further out the training is held, the less information will be retained when implementation rolls around. Training that takes place close to the time of implementation should be made part of the actual implementation plan.


4. Install the production solution. This is the piece everyone remembers. Your solution needs to be moved from development into production. If the solution is brand new, this might be finished in a leisurely and thoughtful manner over a period of time. If this project involves a major change to a current solution, you may have a lot less flexibility in terms of when the new solution moves to production, since the existing solution might need to be brought down for a period of time. You have to make sure all of your production components are implemented successfully, including new hardware, databases, and program code.

5. Convert the data. Data conversion, changing data from one format to another, needs to take place once the infrastructure and the solution are implemented.

6. Perform final verification in production. You should have prepared to test the production solution to ensure everything is working as you expect. This may involve a combination of development and client personnel. The first check is just to make sure everything is up and appears okay. The second check is to actually push data around in the solution, to make sure that the solution is operating as it should. Depending on the type of solution being implemented, this verification step could be extensive.

7. Implement new processes and procedures. Many IT solutions require changes to be made to business processes as well. These changes should be implemented at the same time that the actual solution is deployed.


8. Monitor the solution. Usually the project team will spend some period of time monitoring the implemented solution. If there are problems that come up immediately after implementation, the project team should address and fix them.


11. SYSTEM TESTING


11.1 Introduction
Information processing has undergone major improvements in the past two decades in both hardware and software. Hardware has decreased in size and price, while providing more and faster processing power. Software has become easier to use, while providing increased capabilities. There is an abundance of products available to assist both end users and software developers in their work. Software testing, however, has not progressed significantly. It is still largely a manual process conducted as an art rather than a methodology. It is almost an accepted practice to release software that contains defects. Software that is not thoroughly tested is released for production. This is true for both off-the-shelf software products and custom applications. Software vendors and in-house systems developers release an initial system and then deliver fixes to the code. They continue delivering fixes until they create a new system and stop supporting the old one. The user is then forced to convert to the new system, which again will require fixes. The incident reports are then assigned a priority, and the defects are fixed as time and budgets permit.


11.2 Importance of Testing

Testing is difficult. It requires knowledge of the application and the system architecture. The majority of the preparation work is tedious. The test conditions, test data, and expected results are generally created manually. System testing is also one of the final activities before the system is released for production. There is always pressure to complete system testing promptly to meet the deadline. Nevertheless, systems testing is important.

In a mainframe environment, when the system is distributed to multiple sites, any errors or omissions in the system will affect several groups of users. Any savings realized in downsizing the application will be negated by the costs to correct software errors and reprocess information.

Software developers must deliver reliable and secure systems that satisfy the users' requirements. A key item in successful systems testing is developing a testing methodology rather than relying on the individual style of the test practitioner. The systems testing effort must follow a defined strategy. It must have an objective, a scope, and an approach. Testing is not an art; it is a skill that can be taught.


Testing is generally associated with the execution of programs. The emphasis is on the outcome of the testing, rather than on what is tested and how it is tested. Testing is not a one-step activity of simply executing the test. It requires planning and design. The tests should be reviewed prior to execution to verify their accuracy and completeness. They must be documented and saved for reuse.

System testing is the most extensive testing of the system. It requires more manpower and machine processing time than any other testing level. It is therefore the most expensive testing level. It is a critical process in the system development. It verifies that the system performs the business requirements accurately, completely, and within the required performance limits. It must be thorough, controlled and managed.

11.3 Testing Definitions

Software development has several levels of testing:

Unit Testing
Integration Testing
Systems Testing
Acceptance Testing


11.3.1 Unit Testing

The first level of testing is called unit testing, which is done during the development of the system. Unit testing is essential for verification of the code produced during the coding phase. Errors are noted down and corrected immediately. It is performed by the programmer. It uses the program specifications and the program itself as its source. Thus, our modules are individually tested here. There is no formal documentation required for unit testing.

11.3.2 Integration Testing

The second level of testing is integration testing. Here, different dependent modules are assembled and tested for any bugs that may surface due to the integration of modules. Thus, the administrator module and the various application modules are tested together here.

11.3.3 Systems Testing

The third level of testing is systems testing. Systems testing verifies that the system performs the business functions while meeting the specified performance requirements. It is performed by a team consisting of software technicians and users. It uses the Systems Requirements document, the System Architectural Design and Detailed Design documents, and the Information Systems Department standards as its sources. Documentation is recorded and saved for systems testing.


11.3.4 Acceptance Testing

The final level of testing is acceptance testing. Acceptance testing provides the users with assurance that the system is ready for production use; it is performed by the users. It uses the System Requirements document as its source. There is no formal documentation required for acceptance testing. Systems testing is the major testing effort of the project. It is the functional testing of the application and is concerned with the following:

1. Quality/standards compliance
2. Business requirements
3. Performance capabilities
4. Operational capabilities

Below are defined a few test cases which have been implemented for the various screens. The outputs have been registered and the required changes have been incorporated.


12. REPORTS/OUTPUT SCREENS


12.1 LOGIN:


12.2 CREATE USER:


12.3 CLOUD INFRASTRUCTURE


12.4 APPLICATION MAINTENANCE


12.5 AGGREGATE THE STORAGE AND IMAGE MIRRORING FORM 1


12.6 AGGREGATE THE STORAGE AND IMAGE MIRRORING FORM 2


12.7 APPLICATION ACCESS PATTERN


12.8 OPTIMIZE MULTISNAPSHOTTING


12.9 ZOOM ON MIRRORING FORM 1


12.10 ZOOM ON MIRRORING FORM 2


13. CONCLUSIONS

As cloud computing becomes increasingly popular, efficient management of VM images, such as image propagation to compute nodes and image snapshotting for checkpointing or migration, is critical. The performance of these operations directly affects the usability of the benefits offered by cloud computing systems. We introduced several techniques that integrate with cloud middleware to efficiently handle two patterns: multideployment and multisnapshotting.


14. FUTURE ENHANCEMENT

We propose a lazy VM deployment scheme that fetches VM image content as needed by the application executing in the VM, thus reducing the pressure on the VM storage service for heavily concurrent deployment requests. Furthermore, we leverage object versioning to save only local VM image differences back to persistent storage when a snapshot is created, yet provide the illusion that the snapshot is a different, fully independent image. This has two important benefits. First, it handles the management of updates independently of the hypervisor, thus greatly improving the portability of VM images and compensating for the lack of VM image format standardization. Second, it handles snapshotting transparently at the level of the VM image repository, greatly simplifying the management of snapshots. With respect to multisnapshotting, interesting reductions in time and storage space can be obtained by introducing deduplication schemes. We also plan to fully integrate the current work with Nimbus and explore its benefits for more complex HPC applications in the real world.


15. BIBLIOGRAPHY

Good teachers are worth more than a thousand books; we have them in our department.

15.1 Abbreviations

OOPS    Object Oriented Programming Concepts
TCP/IP  Transmission Control Protocol / Internet Protocol
JDBC    Java Database Connectivity
EIS     Enterprise Information Systems
BIOS    Basic Input/Output System
RMI     Remote Method Invocation
JNDI    Java Naming and Directory Interface


15.2 References Made From


1. M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. A view of cloud computing. Commun. ACM, 53:50-58, April 2010.

2. A. Bar-Noy and S. Kipnis. Designing broadcasting algorithms in the postal model for message-passing systems. In SPAA '92: Proceedings of the 4th Annual ACM Symposium on Parallel Algorithms and Architectures, pages 13-22, New York, 1992. ACM.

3. P. H. Carns, W. B. Ligon, R. B. Ross, and R. Thakur. PVFS: A parallel file system for Linux clusters. In Proceedings of the 4th Annual Linux Showcase and Conference, pages 317-327, Atlanta, GA, 2000. USENIX Association.

4. B. Claudel, G. Huard, and O. Richard. TakTuk, adaptive deployment of remote executions. In HPDC '09: Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing, pages 91-100, New York, 2009. ACM.

5. G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels. Dynamo: Amazon's highly available key-value store. In SOSP '07: Proceedings of the 21st ACM SIGOPS Symposium on Operating Systems Principles, pages 205-220, New York, 2007. ACM.

6. M. Gagne. Cooking with Linux: still searching for the ultimate Linux distro? Linux J., 2007(161):9, 2007.

7. J. G. Hansen and E. Jul. Scalable virtual machine storage using local disks. SIGOPS Oper. Syst. Rev., 44:71-79, December 2010.

8. M. Hibler, L. Stoller, J. Lepreau, R. Ricci, and C. Barb. Fast, scalable disk imaging with Frisbee. In ATC '03: Proceedings of the 2003 USENIX Annual Technical Conference, pages 283-296, San Antonio, TX, 2003.

9. Y. Jegou, S. Lanteri, J. Leduc, M. Noredine, G. Mornet, R. Namyst, P. Primet, B. Quetier, O. Richard, E.-G. Talbi, and T. Irea. Grid'5000: A large scale and highly reconfigurable experimental grid testbed. International Journal of High Performance Computing Applications, 20(4):481-494, November 2006.

10. K. Keahey and T. Freeman. Science clouds: Early experiences in cloud computing for scientific applications. In CCA '08: Proceedings of the 1st Conference on Cloud Computing and Its Applications, 2008.


15.3 Sites Referred:

http://java.sun.com
http://www.sourcefordgde.com
http://www.networkcomputing.com/
http://www.roseindia.com/
http://www.java2s.com/

