
Types of Virtualization

Para-virtualization
- VM does not simulate hardware
- Uses a special API that a modified guest OS must use
- Hypercalls are trapped by the hypervisor and serviced
- Examples: Xen, VMware ESX Server

OS-level virtualization
- OS allows multiple secure virtual servers to be run
- Guest OS is the same as the host OS, but appears isolated; apps see an isolated OS
- Examples: Solaris Containers, BSD Jails, Linux VServer

Application-level virtualization
- Application is given its own copy of components that are not shared (e.g., own registry files, global objects); the virtual environment (VE) prevents conflicts
- Example: JVM

Emulation
- VM emulates/simulates complete hardware
- An unmodified guest OS built for a different PC can be run
- Examples: Bochs, VirtualPC for Mac, QEMU

Full/native virtualization
- VM simulates enough hardware to allow an unmodified guest OS to be run in isolation, for the same hardware CPU
- Examples: IBM VM family, VMware Workstation, Parallels
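To make the paravirtualization idea above concrete, here is a toy Python simulation. It is not real hypervisor code, and every class and hypercall name in it is invented for illustration; it only shows the control flow: a "modified" guest invokes explicit hypercalls that the hypervisor dispatches and services, instead of issuing privileged hardware instructions that would have to be trapped and emulated.

```python
# Conceptual toy only -- real hypercalls are CPU-level traps, not method calls.

class ToyHypervisor:
    """Stand-in for a hypervisor that services guest hypercalls."""

    def __init__(self):
        # Dispatch table: hypercall number -> handler (an invented interface,
        # loosely in the spirit of a system such as Xen).
        self._handlers = {
            0: self._console_write,
            1: self._set_timer,
        }

    def hypercall(self, number, *args):
        handler = self._handlers.get(number)
        if handler is None:
            raise ValueError(f"unknown hypercall {number}")
        return handler(*args)

    def _console_write(self, text):
        print(f"[hv] guest console: {text}")

    def _set_timer(self, ticks):
        print(f"[hv] timer armed for {ticks} ticks")


class ParavirtGuest:
    """A 'modified' guest OS: it calls the hypervisor API directly
    instead of touching privileged hardware state."""

    def __init__(self, hv):
        self.hv = hv

    def boot(self):
        self.hv.hypercall(0, "guest kernel booting")
        self.hv.hypercall(1, 100)


ParavirtGuest(ToyHypervisor()).boot()
```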

Types of virtualization
Hardware

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine. [1][2]

In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine manager. Different types of hardware virtualization include:

1. Full virtualization: almost complete simulation of the actual hardware, allowing software (typically a guest operating system) to run unmodified.
2. Partial virtualization: some, but not all, of the target environment is simulated. Some guest programs may therefore need modifications to run in this virtual environment.
3. Paravirtualization: the hardware environment is not simulated; instead, guest programs execute in their own isolated domains, as if running on separate systems. Guest programs need to be specifically modified to run in this environment.

Hardware-assisted virtualization is a way of improving the efficiency of hardware virtualization. It employs specially designed CPUs and hardware components that help improve the performance of a guest environment.

Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization a hypervisor (a piece of software) imitates a particular piece of computer hardware, or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but their domains of use differ.

Desktop

Desktop virtualization is the concept of separating the logical desktop from the physical machine. One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device over a network connection, such as a LAN, wireless LAN, or even the Internet. In addition, the host computer in this scenario becomes a server capable of hosting multiple virtual machines at the same time for multiple users. [3]

As organizations continue to virtualize and converge their data center environments, client architectures also continue to evolve to take advantage of the predictability, continuity, and quality of service delivered by converged infrastructure. For example, companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing. [4] Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data. [4] For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and the business. [5][6]

Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each user is given a desktop and a personal folder in which to store their files. [3] With a multiseat configuration, session virtualization can be accomplished using a single PC with multiple monitors, keyboards, and mice connected.

Thin clients, which are common in desktop virtualization, are simple and/or cheap computers primarily designed to connect to the network. They may lack significant hard disk storage space, RAM, or even processing power, but many organizations are beginning to look at the cost benefits of eliminating thick-client desktops packed with software (and requiring software licensing fees) in favor of more strategic investments. [7] Desktop virtualization simplifies software versioning and patch management: the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over which applications the user is allowed to access on the workstation.

Moving virtualized desktops into the cloud creates hosted virtual desktops (HVDs), where the desktop images are centrally managed and maintained by a specialist hosting firm. Benefits include scalability and the reduction of capital expenditure, which is replaced by a monthly operational cost. [8]

Virtualized desktops are useful in a secret/classified environment. With a thick client, the hard drive needs to be removed and locked up in a safe every night. With thin clients, the machine does not contain classified data once the user is logged off, since the content is held on the centralized server. This is especially useful for classified conference rooms, or other situations where different people use the same computer and there would otherwise be no individual accountability for the hard drive.
Software
- Operating system-level virtualization: hosting multiple virtualized environments within a single OS instance.
- Application virtualization and workspace virtualization: hosting individual applications in an environment separated from the underlying OS. Application virtualization is closely associated with the concept of portable applications.
- Service virtualization: emulating the behavior of dependent (e.g., third-party, evolving, or not yet implemented) system components that are needed to exercise an application under test (AUT) for development or testing purposes. Rather than virtualizing entire components, it virtualizes only the specific slices of dependent behavior critical to the execution of development and testing tasks.
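As a concrete illustration of service virtualization, here is a minimal, hypothetical Python sketch: the application under test talks to a stand-in payment service that emulates only the slice of behavior the tests exercise. All names (the payment service, its responses, the checkout function) are invented for this example.

```python
class VirtualPaymentService:
    """Emulates only the dependent behavior the tests need --
    a stand-in for a real (perhaps unfinished) payment service."""

    def charge(self, amount):
        if amount <= 0:
            return {"status": "error", "reason": "invalid amount"}
        # Canned response: no network, no real transaction.
        return {"status": "ok", "txn_id": "TEST-0001"}


def checkout(payment_service, amount):
    # Application-under-test logic, unaware the dependency is virtual.
    result = payment_service.charge(amount)
    return result["status"] == "ok"


assert checkout(VirtualPaymentService(), 25) is True
assert checkout(VirtualPaymentService(), -5) is False
print("tests passed")
```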

Memory
- Memory virtualization: aggregating random-access memory (RAM) resources from networked systems into a single memory pool.
- Virtual memory: giving an application program the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation.
- Memory overcommitment: allowing a computer to grant more memory to the virtual machines running on it than is installed on the physical computer.
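The memory overcommitment idea can be sketched in a few lines of Python. This is a conceptual toy, not how any real hypervisor manages memory; it simply shows physical frames being consumed lazily, on first touch, so that more memory can be promised to guests than physically exists.

```python
class OvercommittedHost:
    """Toy host: promises pages freely, backs them with frames on demand."""

    def __init__(self, physical_frames):
        self.free_frames = list(range(physical_frames))
        self.page_table = {}  # (vm_id, virtual_page) -> physical frame

    def grant(self, vm_id, pages):
        # Granting is pure bookkeeping; no frames are consumed yet.
        print(f"VM {vm_id} granted {pages} pages")

    def touch(self, vm_id, page):
        key = (vm_id, page)
        if key not in self.page_table:
            if not self.free_frames:
                # A real host would swap, balloon, or deduplicate here.
                raise MemoryError("physical memory exhausted")
            self.page_table[key] = self.free_frames.pop()
        return self.page_table[key]


host = OvercommittedHost(physical_frames=4)
host.grant("vm1", 4)         # 4 + 4 promised pages ...
host.grant("vm2", 4)         # ... against only 4 real frames
print(host.touch("vm1", 0))  # a frame is assigned only when touched
print(host.touch("vm2", 0))
```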

Storage
- Storage virtualization: completely abstracting logical storage from physical storage.
- Distributed file system
- Storage hypervisor
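A minimal Python sketch of the logical-to-physical mapping at the heart of storage virtualization, assuming an invented extent layout: a virtual volume resolves a logical block address to a (device, offset) pair, so callers never see where the data actually lives.

```python
class VirtualVolume:
    """Toy volume: maps logical block addresses onto physical extents."""

    def __init__(self):
        # Each extent: (logical_start, length, device, physical_start).
        # Devices and offsets are made up for illustration.
        self.extents = [
            (0,   100, "diskA", 500),
            (100, 100, "diskB", 0),
        ]

    def resolve(self, lba):
        for start, length, dev, phys in self.extents:
            if start <= lba < start + length:
                return dev, phys + (lba - start)
        raise ValueError("unmapped logical block")


vol = VirtualVolume()
print(vol.resolve(42))    # ('diskA', 542)
print(vol.resolve(150))   # ('diskB', 50)
```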

Data
- Data virtualization: presenting data as an abstract layer, independent of underlying database systems, structures, and storage.
- Database virtualization: decoupling the database layer, which lies between the storage and application layers within the overall application stack.
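A toy Python sketch of the data virtualization idea: a thin layer presents one query interface over two dissimilar backends, so callers never deal with where or how the rows are stored. The backends and fields below are invented for illustration.

```python
class ListBackend:
    """Rows stored as a plain list of dicts."""

    def __init__(self, rows):
        self.rows = rows

    def fetch(self):
        return self.rows


class DictBackend:
    """Rows stored as a keyed table; normalized on the way out."""

    def __init__(self, table):
        self.table = table

    def fetch(self):
        return [{"id": k, **v} for k, v in self.table.items()]


class DataVirtualizationLayer:
    """Callers query here; the physical representation is abstracted away."""

    def __init__(self, *backends):
        self.backends = backends

    def query(self, predicate):
        return [row for b in self.backends for row in b.fetch() if predicate(row)]


layer = DataVirtualizationLayer(
    ListBackend([{"id": 1, "name": "ann"}]),
    DictBackend({2: {"name": "bob"}}),
)
print(layer.query(lambda r: True))  # rows from both backends, one interface
```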

Network
- Network virtualization: creating a virtualized network addressing space within or across network subnets.

Advanced Capabilities
Snapshotting

A snapshot is the state of a virtual machine and, generally, its storage devices at an exact point in time. Snapshots are "taken" by simply giving an order to do so at a given time, and can be "reverted" to on demand, with the effect that the VM appears (ideally) exactly as it did when the snapshot was taken. This capability is useful, for example, as an extremely rapid backup technique prior to a risky operation. It also provides the foundation for other advanced capabilities (discussed below).

Detailed explanation

Virtual machines frequently use virtual disks for storage. In a very simple case, for example, a 10 gigabyte hard disk is simulated with a 10 gigabyte flat file. Any request by the VM for a location on its physical disk (which does not "exist" as an actual physical object in and of itself) is transparently translated into an operation on the corresponding file (which does exist as part of an actual storage device). Once such a translation layer is present, however, it is possible to intercept the operations and send them to different files, depending on various criteria. In a snapshotting application, every time a snapshot is taken a new file is created and used as an overlay. Whenever the VM does a write, the data is written to the topmost (current) overlay; whenever it does a read, each overlay is checked, working from the most recent back, until the most recent write to the requested location is found. In this manner, the entire stack of snapshots is, subjectively, a single coherent disk. (The RAM of the system can be managed in a similar way, though in the simplest systems snapshots are disk-only and the VM must be restarted.)

Generally, referencing a snapshot means referencing that snapshot and all prior snapshots on which it is based, down to the initial state when the VM was created. To revert to a prior snapshot means to restart (or resume, if memory, processor, and peripheral state snapshots are available in addition to disk state) the machine using only the overlays available up to that exact point in time (taking the snapshot created new overlay files, rendering the ones in use an instant before read-only), plus a new set of overlays to hold the current running state of the machine.
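The overlay scheme just described can be modeled in a short Python sketch. This is a conceptual toy, not any real disk format: writes land in the topmost overlay, reads walk the stack from newest to oldest, and reverting discards the overlays created after the chosen snapshot.

```python
class SnapshotDisk:
    """Toy overlay-based virtual disk; each overlay maps block -> bytes."""

    def __init__(self):
        self.overlays = [{}]  # base "file" first, newest overlay last

    def snapshot(self):
        # Freeze the current state by starting a fresh topmost overlay;
        # everything below it is now effectively read-only.
        self.overlays.append({})
        return len(self.overlays) - 1  # snapshot id (illustrative)

    def write(self, block, data):
        self.overlays[-1][block] = data

    def read(self, block):
        # Check overlays from most recent back to the base.
        for overlay in reversed(self.overlays):
            if block in overlay:
                return overlay[block]
        return b"\x00"  # never-written blocks read as zeros

    def revert(self, snap_id):
        # Drop overlays created after the snapshot, then add a new working
        # overlay to hold the machine's running state from here on.
        self.overlays = self.overlays[:snap_id]
        self.overlays.append({})


disk = SnapshotDisk()
disk.write(0, b"v1")
snap = disk.snapshot()
disk.write(0, b"v2")
print(disk.read(0))   # b'v2'
disk.revert(snap)
print(disk.read(0))   # b'v1' again
```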

History of Virtualization
Virtualization was first developed in the 1960s to partition large mainframe hardware for better hardware utilization. Today, computers based on the x86 architecture face the same problems of rigidity and underutilization that mainframes faced in the 1960s. VMware invented virtualization for the x86 platform in the 1990s to address underutilization and other issues, overcoming many challenges in the process. Today, VMware is the global leader in x86 virtualization, with over 400,000 customers, including 100% of the Fortune 100.

In the Beginning: Mainframe Virtualization

Virtualization was first implemented more than 30 years ago by IBM as a way to logically partition mainframe computers into separate virtual machines. These partitions allowed mainframes to multitask: run multiple applications and processes at the same time. Since mainframes were expensive resources at the time, they were designed for partitioning as a way to fully leverage the investment.

The Different Types of Virtualization


This is where confusion can creep in, so it is important to remember the fundamental concept of virtualization (and again I refer to the article, The Fundamental Concept of Virtualization, on this website): it represents an abstraction from physical resources. All uses of virtualization are centered around this concept. There are three major types of virtualization:

1. Server Virtualization

This type is where most of the attention is focused right now in the world of virtualization, and it is where most companies begin an implementation of this technology. That's not very shocking in light of the fact that server sprawl has become a very large and legitimate problem in enterprises throughout the world. Where a company is simply running out of room in which to place all of its servers, this type of virtualization would of course be viewed with strong interest.

Because each server typically serves one function (i.e., mail server, file server, Internet server, enterprise resource planning server, etc.), with each server using only a fraction of its true processing power, server virtualization breaks through the "one application, one server" barrier and facilitates the consolidation of numerous servers into one physical server. This equates to (a) fewer physical servers required, and (b) 70 to 80 percent or higher utilization of existing hardware, as opposed to the previous 10 to 15 percent. Server virtualization lets one server do the job of multiple servers by sharing the resources of a single server across multiple environments. The software lets a company host multiple operating systems and multiple applications locally and in remote locations, freeing users from physical and geographical limitations.

How are the servers moved over? Most, if not all, virtualization solutions offer a migration tool that takes an existing physical server and makes a virtual hard drive image of that server, complete with its driver stack. That server will then boot up and run as a virtual server. There is no need to rebuild servers or manually reconfigure them as virtual servers.

Without a doubt, the greatest advantage of server virtualization is cost. In addition to energy savings and lower capital expenses due to more efficient use of hardware resources, you get high availability of resources, better management, and improved disaster-recovery processes with a virtual infrastructure. You save on physical space, reduce power consumption and the need for as much cooling, and are able to rapidly deploy a new application without ordering new hardware. There are three different methods that can be employed under the server virtualization category, but I'm not going to get into them right now because I'm trying very hard to keep this as simple as I possibly can. Whichever method is used, the goal of server consolidation is the same.

2. Client (or Desktop) Virtualization

This type of virtualization technology has to do with a client (a workstation desktop or laptop PC: an end-user machine). These can be very difficult for a systems administrator to manage. Whereas any machine in the company's data center has very strict procedures regarding what gets loaded on it and when it gets updated with new software releases, it is often a quite different scene when it comes to the end-user machine. Even if there are supposed to be procedures for such actions on an end-user machine, those procedures are often not followed or paid much heed. A CD or DVD slot makes it easy for non-approved software to be installed that can create problems on that machine. Quite aside from that, end-user machines are more susceptible to malware in numerous ways: e-mail viruses, unwitting spyware downloads, etc. Last but not least, most end-user machines run on Microsoft Windows, which is well known for attracting attacks from hackers and cybercriminals.

IT has to not only deal with all those problems but also attend to the normal upkeep inherent in client machines: keeping approved software up to date, patching the OS, keeping virus definitions current, et al. All of these factors make an IT administrator's job quite challenging. So client virtualization, with the hope of easier client machine management and security, attracts the interest of IT. Because there is no single solution for end-user computing, there is more than one method or model that can be employed:

A. Remote (Server-Hosted) Desktop Virtualization: In this model, the operating environment is hosted on a server in the data center and accessed by the end user across a network.

B. Local (Client-Hosted) Desktop Virtualization: In this model, the operating environment runs locally on the user's physical PC hardware and involves multiple flavors of client-side virtualization techniques that can monitor and protect the execution of the end-user system.

C. Application Virtualization: This is a method of providing a specific application to an end user that is virtualized from the desktop OS and is not installed in a traditional manner. An application can be installed and/or executed locally within a container that controls how it interacts with other system and application components. Or an application can be isolated in its own virtualized sandbox to prevent interaction with other system and application components (a minimal sketch of this isolation idea appears at the end of this section). Or applications can be streamed across a network. Or applications can be delivered across the network to a web browser, with most processing executed on a centralized web server. This last method supports almost any user, with no installation requirement, on almost any platform, in any location, but it only supports a limited set of applications.

3. Storage Virtualization

Storage virtualization is a concept in system administration referring to the abstraction (separation) of logical storage (virtualized partitions of stored data) from physical storage (the devices that hold, spin, read, and write magnetic or optical disks such as CDs, DVDs, or even hard disk drives). This separation gives the systems administrator increased flexibility in how storage is managed for end users. Virtualization of storage helps achieve location independence by abstracting the physical location of the data: the virtualization system presents the user a logical space for data storage and itself handles the process of mapping it to the actual physical location. There are three basic approaches to data storage:

A. Direct-Attached Storage (DAS): This is the traditional method used in data storage, where hard drives are attached to a physical server. Because this method is easy to use but hard to manage, virtualization technology is causing organizations to have second thoughts about its viability.

B. Network-Attached Storage (NAS): This is a machine that resides on your network and provides data storage to other machines. It can be thought of as the first step toward storage virtualization. This approach provides a single source of data, facilitating data backup. By collecting your data in one place, it also avoids the problem of multiple servers needing to access data located on another server.

C. Storage Area Network (SAN): This ultra-sophisticated approach deploys specialized hardware and software to transform mere disk drives into a data storage solution that transfers data on its own high-performance network.

Companies shift over to a SAN when they recognize that corporate data is a key resource that must be available 24/7 and needs to be conveniently managed. The price tag for this approach is very high indeed.
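Returning to application virtualization (item 2C above), the container/sandbox idea can be sketched conceptually in Python. This is a hypothetical illustration, not any real product's mechanism: each application's accesses to "global" paths are silently redirected into a private per-app directory, so two apps can hold conflicting settings without clobbering each other. The directory layout and all names are invented.

```python
import os

class AppSandbox:
    """Toy application-virtualization container: redirects file access
    into a private per-app directory instead of shared system locations."""

    def __init__(self, app_name, root="/tmp/appvirt"):
        self.private_dir = os.path.join(root, app_name)
        os.makedirs(self.private_dir, exist_ok=True)

    def redirect(self, path):
        # Map a "global" path like /etc/app.conf to a private copy.
        flat = path.lstrip("/").replace("/", "_")
        return os.path.join(self.private_dir, flat)

    def write_setting(self, path, text):
        with open(self.redirect(path), "w") as f:
            f.write(text)

    def read_setting(self, path):
        with open(self.redirect(path)) as f:
            return f.read()


a = AppSandbox("app_a")
b = AppSandbox("app_b")
a.write_setting("/etc/app.conf", "theme=dark")
b.write_setting("/etc/app.conf", "theme=light")
print(a.read_setting("/etc/app.conf"))  # each app sees only its own copy
```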
