
Why I Like Virtualization (And Why Hardware Sucks)


By Bob Plankers on Jun 18, 2007 in Featured, Virtualization

I was asked why I like virtualization and why chroot jails aren't a better way to do things, at least on UNIX-like OSes. To figure out why I like virtualization, let's start with what I don't like about hardware:

Failures. Something is always going wrong, whether it's a fan, a disk, a power supply, etc. More servers means more failures. We buy warranties to help with this, but warranties cost money. It takes time to deal with failed components, too.
Firmware. It is hard to keep firmware up to date. Every device is different, and many update methods end up requiring you to go out to the box with a USB stick or a floppy disk. That takes a lot of time, usually at times of the day I'd rather be somewhere else (like sleeping).
Cables. I hate cabling. It costs a lot of money, a foot at a time. It gets tangled. It gets unplugged. It gets mislabeled. It takes a lot of time and vigilance to do right.
KVM. My KVM system is IP-based. It uses Cat5 cable and has USB dongles that attach to the hosts. It costs a ton of money. The dongles have strange incompatibilities which make connecting a server an adventure. It's also another cable to manage, another system to maintain, another drain on your time.
Racks. Racking a server means I have to go to my data center, which may not be in the same building or even the same city as me. I have to worry about available rack space, power in the rack, and cooling. I have to worry about two-post or four-post racks, which type of holes the rack has, and whether I have the right screws. Racks cost a lot of money, too.
Power. I hate power cords. They get tangled, messed up, unplugged. We order short power cords to help with that, but those cost money. To keep things running we have a UPS. UPSes are expensive and require maintenance. Speaking of money, I hate paying power bills, too.
Cooling. Cooling requires equipment, which in turn requires maintenance. It also requires power. Maintenance and power require money. Did I mention that I don't like giving other people money? I want to keep it and buy myself cool things. :-)

It basically comes down to money and time. Time is really money, though, because to get more time you have to hire someone to help. So how do you reduce these costs? Simple: have less hardware.

Why can't we have less hardware?

Applications aren't good at sharing. They require specific versions of other software, different from what other applications require. For example, one application written in Perl might require DBD::Oracle 1.14, another 1.19. Now I need two different copies of the module and it isn't simple anymore, especially if the applications assume DBD::Oracle will be installed in /usr/lib/perl5 (a rough sketch of that juggling follows this list).
It can be hard to figure out performance problems on a machine that is doing a lot of different things. It is hard to tune a machine for performance with multiple different applications. Do I tune for Apache or for MySQL?
Customers have wildly different security requirements.
Customers have wildly different maintenance window needs.
Customers want to build clusters. How do you share a cluster with an application that isn't clustered?
Customers just don't want to share. They don't want anything to do with another project or customer. They want their own machines and want to have their way with them. Coordination between customers is sometimes impossible.
Customers want separate development, test, QA, and production environments. They want to be able to do load testing and other crazy things without impacting or being impacted by someone else's software. Unfortunately, development, test, and QA environments will sit mostly idle over their lifespans.
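To make the shared-library point above concrete, here is a rough sketch of one common workaround: installing the second DBD::Oracle into a per-application prefix and pointing only that application at it. The paths, version numbers, and script names are made up for illustration, and this only helps applications that don't hardcode /usr/lib/perl5.

    # Hypothetical sketch; prefixes, versions, and script names are illustrative.
    # Install the newer DBD::Oracle into a private prefix instead of /usr/lib/perl5:
    perl Makefile.PL PREFIX=/opt/app2/perl5
    make && make install

    # App 1 keeps using the system copy (say, 1.14):
    /usr/bin/perl /opt/app1/bin/report.pl

    # App 2 is launched with PERL5LIB so its private copy (say, 1.19) is found first:
    PERL5LIB=/opt/app2/perl5/lib/perl5/site_perl /usr/bin/perl /opt/app2/bin/loader.pl

Even this small amount of juggling has to be remembered every time either application or the module is patched, which is exactly the kind of complexity that disappears when each application simply gets its own machine.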


Approaches like chroot jails are great for single, well-known applications. The more applications you need to run in a chroot jail, though, the harder it gets to maintain. You end up needing second, third, and fourth copies of libraries, binaries, etc., especially if you're trying to chroot interactive users (a rough sketch of that copying is shown below). Automatic patching tools like yum, up2date, etc. don't update your jails, so you have to do that manually. The jail also doesn't address things like differing maintenance windows, or any of the non-technical problems of sharing a machine. This approach may have performance benefits, but to be honest performance is usually not a big factor.

Most of the problems in sharing a machine are actually problems with sharing the operating system. Virtualization decouples the hardware from the operating system, and because of that we can solve all the problems of sharing a machine by choosing not to share at all. No chroot jails to maintain, no worries about versions of software, no endless coordination meetings just to schedule a reboot of the server.

So why do I like virtualization? It lets me get rid of hardware but doesn't force me to manage the complex situations that arise from a shared OS. For the operating system it is business as usual, which means relative simplicity and well-understood processes. I like that.
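For comparison, here is the chroot sketch referred to above: populating even a minimal jail means copying a binary plus every library it links against, and doing it again after every patch. The jail path and binary are made up for illustration; a real jail would also need device nodes, config files, and so on.

    # Hypothetical sketch of populating a minimal chroot jail; paths are illustrative.
    JAIL=/srv/jails/app1
    mkdir -p "$JAIL"/bin

    # Copy a binary plus every shared library it links against into the jail:
    cp /bin/bash "$JAIL"/bin/
    for lib in $(ldd /bin/bash | awk '{ print $(NF-1) }' | grep '^/'); do
        mkdir -p "$JAIL$(dirname "$lib")"
        cp "$lib" "$JAIL$lib"
    done

    # Repeat for every binary the application needs, and again after every patch,
    # because yum/up2date only update the copies outside the jail.
    chroot "$JAIL" /bin/bash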

Related Posts
1. Why VMmark Sucks
2. Keys To Virtualization Success
3. Vendors Who Don't Realize Virtualization Is Here To Stay
4. CIHost Sucks and I Need Another Host
5. Vista Officially Sucks


6 Comment(s)
By Ian on Jun 19, 2007 | Reply
Excellent post. For my situation, seeing as I work for a K-12 school district, it's very difficult to justify the up-front costs of virtualization. Between the hardware cost of a beefy Dell 6000-series box and the high cost of licensing VMware, it's too much for my situation. If our district was twice the size it is, we would be getting to the point where virtualization might be doable. My only real option is to wait and see what Novell does with Xen in the next couple of years. That might give me an avenue to explore the technology and save some bucks on power and cooling. I'm maxing out the cooling right now with one rack of servers, and we managed to blow a 20 amp fuse earlier this year, and all of our machines lost power when we tripped the breaker on the generator. Sorry for the rambling ;)

By pooya on Jun 19, 2007 | Reply
Thanks Bob. Very nice article. I don't know about the state of current virtualization software, whether the open-source and free ones (Xen/QEMU?) are good enough, and whether they support migrating a live system to another machine. If they are already like that, then that's awesome.

By nickyp on Jun 20, 2007 | Reply
Pooya: Xen is pretty good and reliable. I trust a vendor like Red Hat to provide us with a good (and stable) Xen environment. Live migration is supported, but it requires some kind of shared storage, and it sometimes even supports booting from a shared drive so the hardware can remain diskless. Shared storage management will make the deployment more complex, of course. But you're reading the Lone Sysadmin and already know that ;-) Another issue is that the one shared storage node becomes a single point of failure for all your VMs. How do you treat this SPOF in your environments, Bob? I guess everything has its pros and cons.
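As an aside on the live-migration point raised above, here is a minimal sketch of how a Xen 3.x migration of that era is typically kicked off from the source host. The guest and destination names are made up, and it assumes both hosts see the same shared storage holding the guest's disk and that xend's relocation server is enabled on the destination.

    # Hypothetical sketch; guest and host names are illustrative.
    # Assumes shared storage visible to both hosts and, on the destination,
    # (xend-relocation-server yes) in /etc/xen/xend-config.sxp.
    xm list                               # confirm the guest is running locally
    xm migrate --live webvm01 xenhost02   # move it without shutting it down
    xm list                               # the guest should now be gone from this host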


By Bob Plankers on Jun 20, 2007 | Reply
Right now storage is the Achilles' heel of virtualization. There is no way around your storage being a single point of failure :-(

By pooya on Jun 20, 2007 | Reply
Well, I guess in theory storage can also be distributed. Hmm... what was that? Coda? Lustre? I should look into them to see if they're reliable.

By James Grinter on Jun 22, 2007 | Reply
I've just started reading the short paper about cooling efficiency being presented at USENIX this week [Cullen Bash and George Forman: "Cool Job Allocation..."]. They suggest some neat ideas for future data centres, such as moving VMs around to dynamically migrate jobs and improve cooling efficiency.

