Technology
Before you can have any sort of serious conversation about these two systems, you must first understand their underlying technologies and overall approaches. At present, there are two primary approaches to virtualization: paravirtualization and full virtualization. I'll start with full virtualization, both because it came along first and because it's a little easier to understand.
In a fully virtualized system, the virtualized operating system is presented with a completely emulated machine made up of emulated hardware devices. This "presentation layer" typically runs on top of a complete operating system, such as Windows or Linux. The presentation layer is completely consistent from virtual machine to virtual machine, regardless of the underlying hardware. So, for example, a virtual machine will always see a Realtek network card, a standard SCSI controller, and so on. This allows drivers to be consistent from virtual machine to virtual machine, which nets you flexibility, consistency, and ease of installation (as well as stability, as was mentioned in a prior post). Best of all, because all of the hardware is emulated, you can run virtually any operating system on it, unmodified.
There are some drawbacks to this approach, however. First, since the hardware is emulated, there is a good deal of translation taking place, which costs performance. Essentially, there are two layers of drivers translating requests between the software and the hardware. For example, let's say a software package on a virtual machine wants to send a packet out to the network interface. It sends a standard request to its operating system, which in turn forwards that request to the driver for the emulated network card. The driver then converts the request from software language to hardware language and passes it down the stack to the presentation layer. The presentation layer takes that hardware request for the emulated hardware, converts it back to a software request, and hands it off to the core OS running on the hardware. The core OS then hands the request to the real hardware driver, which translates it again to hardware language and finally passes it to the actual hardware. Basically, I/O flows up and down the stack, as shown below:
Virtual Machine (Unmodified OS)
Virtual Machine Drivers
--------------------------------
Presentation Layer (Translation)
--------------------------------
Core OS (Windows or Linux)
Core OS Drivers
--------------------------------
Hardware
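To make that layering concrete, here is a toy Python sketch of the round trip just described. The classes and names are purely illustrative - no real hypervisor exposes an API like this - but they show how every request is translated twice before it reaches the wire:

```python
# Toy model of the fully virtualized I/O path described above.
# All names are illustrative; no real product exposes this API.

class GuestDriver:
    """Driver inside the VM for the *emulated* NIC (e.g. the Realtek model)."""
    def send(self, packet):
        hw_request = {"op": "tx", "data": packet}      # software -> "hardware" language
        return PresentationLayer().handle(hw_request)  # first translation boundary

class PresentationLayer:
    """Emulated hardware: converts the guest's hardware request back to software."""
    def handle(self, hw_request):
        sw_request = {"op": "send", "data": hw_request["data"]}
        return CoreOSDriver().send(sw_request)          # hand off to the core OS

class CoreOSDriver:
    """Real driver in the core OS: translates once more for the physical NIC."""
    def send(self, sw_request):
        return f"wire <- {sw_request['data']}"          # second translation boundary

print(GuestDriver().send("hello"))   # the packet crosses two driver layers each way
```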
The various vendors have, of course, done a lot of work to optimize this process, which has significantly reduced the time required to traverse this stack, but the stack remains the primary source of latency.
In addition, unmodified operating systems expect to have unrestricted access to the hardware for some functions. While the technical details of execution rings, ring 0, and the like are beyond the scope of this email, suffice it to say that these calls must be "trapped" by the presentation layer and translated to prevent exceptions from occurring.
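As a rough, purely illustrative sketch of that trapping (the names below are invented and bear no relation to any real VMM): the guest issues a privileged instruction, the fault is caught, and the presentation layer emulates the effect against the virtual machine's own state instead:

```python
# Toy illustration of trap-and-emulate; not how any real VMM is written.

class PrivilegedInstructionFault(Exception):
    pass

def guest_executes(instruction, in_ring0=False):
    """The guest believes it runs in ring 0; on the real hardware it does not."""
    if instruction.startswith("priv_") and not in_ring0:
        raise PrivilegedInstructionFault(instruction)
    return f"executed {instruction}"

def presentation_layer_run(instruction, emulated_state):
    try:
        return guest_executes(instruction)
    except PrivilegedInstructionFault as fault:
        # Trap: emulate the privileged operation against the VM's state instead.
        emulated_state[str(fault)] = "emulated"
        return f"trapped and emulated {fault}"

state = {}
print(presentation_layer_run("mov_reg", state))        # ordinary instruction runs directly
print(presentation_layer_run("priv_load_cr3", state))  # privileged one is trapped and emulated
```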
Paravirtualization is a newer approach to virtualizing hardware. In a paravirtualized environment, the first thing loaded when a system boots is a thin layer called the "hypervisor"; the core OS is loaded next, followed by the virtual machines. It's important to note that both the core OS and the virtual machines sit on top of this hypervisor. Essentially, the hypervisor eliminates much of the translation taking place in fully virtualized systems. It is difficult to describe this without oversimplifying, but basically think of the hypervisor as a queue into which hardware requests are prioritized and placed, kinda like a "tube," to use the terminology of a now famous senator.

The process flows like this, using the same example as above: the virtual machine sends a standard I/O request to its operating system. Rather than translating that request for an emulated interface driver, the request is dropped into the hypervisor queue and prioritized according to user-defined policies. The request passes down the queue to the core OS, which hands it off to the hardware driver for translation from software language to hardware language. See the crude table below for a visual:
Core OS  | Virtualized OS (Modified) | Virtualized OS (Modified)
Drivers  |                           |
----------------------------------------------------------------
                           Hypervisor
----------------------------------------------------------------
                            Hardware
This obviously eliminates a significant amount of latency - but there's a catch. Notice the word "modified" next to the virtualized OSs above. In other words, the OS must be aware that it is being virtualized (or paravirtualized) to take advantage of these improvements.
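To picture that "tube," here is a toy Python sketch of a prioritized hypervisor queue. The policy table and VM names are invented for illustration; the point is simply that a paravirtualized guest drops its request straight into the queue rather than talking to an emulated device:

```python
import heapq

# Toy sketch of the hypervisor "tube": a priority queue of I/O requests.
# Priorities and VM names are invented for illustration only.

POLICY = {"db-vm": 0, "web-vm": 1, "test-vm": 2}   # lower number = higher priority

hypervisor_queue = []

def guest_io(vm_name, request):
    """A paravirtualized guest drops its request directly into the queue."""
    heapq.heappush(hypervisor_queue, (POLICY[vm_name], vm_name, request))

def core_os_dispatch():
    """The core OS drains the queue in priority order and drives the real driver."""
    while hypervisor_queue:
        _, vm, req = heapq.heappop(hypervisor_queue)
        print(f"hardware driver handles '{req}' for {vm}")

guest_io("test-vm", "send packet")
guest_io("db-vm", "read block")
core_os_dispatch()   # db-vm's request is serviced first, per policy
```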
VMware
VMware has been on the market for some time and is a fully virtualized system. Lately they've been referring to it as a hypervisor (because that's the cool word of the day in virtualization), but it really isn't (for the most part - more on that later). I think they justify the label in the sense that their system is a layer between the hardware and the virtual machines. VMware is available in a variety of "versions" (code for license schemes and costs), although they really only have two engines: a workstation-class engine and an enterprise engine.
The workstation-class engine is what's underneath all the "workstation" versions as well as the free edition, which used to be called GSX. These all install on a variety of OSs, from Linux to Windows to Mac OS, and essentially run as described above. There have been some minor optimizations, but basically you can expect some fairly significant latency in I/O operations. Their enterprise engine, called ESX and/or Virtual Infrastructure (VI), is a whole different animal. It is a highly optimized version of Linux designed specifically to run VMware software, and nothing else. ESX is faster than their workstation-class offerings by a wide margin. Their prior release (version 2) pretty much used the same structure as described above, with carefully optimized drivers to improve performance. Typical average overhead for an ESXv2 system was roughly 15 percent over bare-metal (i.e., non-virtualized) performance, although that number tended to increase a bit with more virtual machines.
ESXv3/Virtual Infrastructure takes this a step further. They have taken advantage of the latest in hardware-assisted virtualization, as well as adding some "hypervisor-like" optimizations, which allow it to eke out even better performance. The most recent tests indicate that it can approach 90 percent of bare-metal performance under an average workload.
VMware really shines in its management tools. It offers a broad range of tools for everything from basic systems management, to rapid provisioning and prototyping, to policy- and performance-based resource allocation. Overall, it's quite a strong solution, and no IT administrator will find it lacking in capability.
That said, there are, of course, drawbacks. First and foremost is cost. If you want all the flexibility described in my previous email, such as shared storage, live migration, high availability, etc., you have to buy Virtual Infrastructure Enterprise. It is typically sold per CPU socket (two at a time), with the education price in the neighborhood of $1,650 per socket (i.e., $3,300 for your first license). They require you to purchase support as well, at $615 per year sold in three-year chunks (i.e., roughly $1,850 for each two-socket server).
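To put those list prices in perspective, here is the arithmetic for a hypothetical four-server deployment, assuming the education pricing quoted above and support billed per two-socket server (check current quotes before budgeting anything):

```python
# Rough cost sketch for VMware VI Enterprise at the education pricing quoted above.
# The server count is hypothetical; support is assumed to be billed per two-socket server.

license_per_socket = 1650       # VI Enterprise, education pricing
support_per_year   = 615        # mandatory support, sold in three-year chunks
sockets_per_server = 2
servers            = 4

license_cost = license_per_socket * sockets_per_server * servers
support_cost = support_per_year * 3 * servers   # one three-year chunk per server (~$1,850 each)

print(f"licenses: ${license_cost:,}  support: ${support_cost:,}  "
      f"total: ${license_cost + support_cost:,}")
# licenses: $13,200  support: $7,380  total: $20,580
```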
You also have some hardware compatibility issues. The list of supported hardware for a VI3 system is pretty good, but beyond it, it's largely their way or the highway. If you are going to use shared storage, you have to use approved hardware, and you have to connect it their way. While this may not be such a bad thing, it can be limiting. For example, hardware-assisted iSCSI support is a no-go at this time, although I hear they are working on it. In addition, if you are going to use a shared file system, you must use VMware's own VMFS. This can set you up for a vendor lock-in situation that you may not want to be in, say, five years from now. There are also some physical limitations, such as no live migrations from AMD to Intel hardware.
Xen
Xen is the relative newcomer on the block, although it has actually been in development for many years. It is an open-source solution, meaning that its code is free to be modified and improved upon by anyone in the community - and boy have they ever. Developers from AMD, Intel, HP, IBM, Novell, Red Hat, and even Microsoft are all involved in the project. It is supported by EVERY major OS, server, and silicon vendor, including Intel, AMD, Cisco, Dell, Egenera, HP, IBM, Mellanox, Network Appliance, Novell, Red Hat, SGI, Sun, Unisys, Veritas, Voltaire, XenSource, and soon, Microsoft.
Xen, in its present state, is a hybrid capable of handling both paravirtualized OSs and unmodified (i.e., fully virtualized) OSs. Linux, Solaris, BSD, and a variety of other Unix-like OSs are all available and/or modifiable to be "virtualization aware." The next (big) kid on the block will be Microsoft, which has partnered with both XenSource (the company formed to manage the Xen project) and Novell to make sure that the next version of Windows Server and its built-in virtualization technology will be completely compatible with the Xen hypervisor. This means that you will be able to easily migrate virtual machines to and from Windows Virtual Server and a Xen-based host without modification, as demonstrated a few weeks back at Novell's BrainShare conference. Microsoft has also chosen paravirtualization for its servers and systems. It's important to note, however, that Microsoft does not intend to support Linux running on a Windows virtual server.
The Xen technology is very well designed, extraordinarily fast and scalable, and capable of all the enterprise-class functionality described in my prior post. And because it is open source, you gain all the benefits of open-source hardware and systems support, dramatically increasing its flexibility. For example, you can use any Linux-supported storage system and file system, including Fibre Channel, iSCSI (both software- and hardware-initiated), ATA over Ethernet, GFS, OCFS, NFS, CLVM, raw partitions, and on and on.
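To give a feel for that flexibility, here is a minimal paravirtualized guest configuration in the Python-syntax format Xen's xm toolstack reads. The guest, volume, and bridge names are just examples from a hypothetical setup:

```python
# /etc/xen/web01 -- minimal paravirtualized guest config (Python syntax, read by xm).
# Device paths and names below are hypothetical examples.

name       = "web01"
memory     = 1024                      # MB of RAM for the guest
vcpus      = 2
bootloader = "/usr/bin/pygrub"         # boot the guest's own kernel from its disk

# Any block device the core OS can see works here: an LVM volume, an iSCSI LUN,
# a file-backed image, and so on.
disk = ["phy:/dev/vg0/web01,xvda,w"]

# Attach the guest to the default Xen bridge.
vif = ["bridge=xenbr0"]
```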
But as XenSource CTO Simon Crosby says, "the Xen hypervisor is an engine, and not a car. A powerful engine that needs a great gearbox, shocks, tires and the body to go with it."
While it is fully manageable from the command line, management solutions are where the vendors come in, and there is no shortage of those (which is a good thing). There are several free management tools from the likes of Red Hat, Qlusters, BixData, and others. Red Hat Enterprise Linux 5 (RHEL5) comes with integrated Xen virtualization, including all the tools you will need to manage it, through the simple click of a checkbox at install time. Novell will include a good set of tools in SUSE Linux Enterprise Server 10 SP1. And XenSource has its own line of tools for managing a Xen-based environment, including its flagship product, XenEnterprise.
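As one example of scripting against the RHEL5 stack, the libvirt Python bindings it ships with can talk to the local Xen daemon. A minimal read-only sketch, assuming libvirt is installed and the Xen daemons are running:

```python
# Minimal sketch using the libvirt Python bindings to inspect a local Xen host.
# Assumes the libvirt packages are installed and xend is running.

import libvirt

conn = libvirt.openReadOnly("xen:///")          # read-only connection to the local Xen host
print("host:", conn.getHostname())

for dom_id in conn.listDomainsID():             # IDs of the running domains
    dom = conn.lookupByID(dom_id)
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name():<12} vcpus={vcpus} mem={mem // 1024} MB")

conn.close()
```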
In terms of performance, Xen handily outmatches VMware when running paravirtualized OSs, and generally matches VMware when running Windows unmodified (see the caveat below). And the cost ranges anywhere from free, for the open-source engine and tools, to $325 per socket.
It's also a bit more scalable than VMware. You can assign up to 32 virtual CPUs (i.e., virtual SMP) to any virtual machine, versus 4 on VMware, and it supports up to 64 GB of RAM on a single piece of hardware. You can even add and remove CPUs and RAM on the fly for running virtual machines.
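As a rough sketch of what that hot add looks like through the libvirt bindings (the domain name and sizes are hypothetical, and the guest kernel must support CPU and memory hotplug):

```python
# Sketch: resize a running Xen guest on the fly via the libvirt Python bindings.
# The domain name and sizes are examples; the guest must support hotplug.

import libvirt

conn = libvirt.open("xen:///")
dom = conn.lookupByName("web01")        # hypothetical running guest

dom.setVcpus(4)                         # grow from 2 to 4 virtual CPUs
dom.setMemory(2 * 1024 * 1024)          # set current memory to 2 GB (value is in KB)

print(dom.info())                       # [state, maxMem, memory, nrVirtCpu, cpuTime]
conn.close()
```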
There are, of course, some drawbacks to Xen as well. First, in order to achieve maximum performance with unmodified OSs (i.e., Windows), special "paravirtualized" drivers are required for the OS. These drivers essentially intercept the software I/O requests and hand them off to the hypervisor, in a similar fashion to a paravirtualized OS. Without them, you can expect performance similar to the workstation editions of VMware, i.e., lots of overhead. While this may be fine for a smaller, low-I/O application like a web app server, you wouldn't want to run anything intensive without them. At present, these drivers are part of the XenSource offerings and Novell's SLES10 Service Pack 1 beta; RHEL5 has no paravirtualized drivers as of yet. Short term, this limits your choices, but in the long term, when the virtualization-aware version of Windows Server ships later this year, it will no longer be an issue. Keep in mind that this is no different from VMware - you are installing specialized drivers either way.
There are also some limitations in the management tools. XenSource's tools are excellent, but they do not yet contain all of the features of VMware's. For example, live migrations are not yet possible using XenSource's management console; this functionality is expected next quarter. Rapid provisioning, prototyping, and cloning are all there, but some of the more advanced policy-based resource allocation features in VMware VI3 are not (yet). Red Hat is building a tool called virt-factory that will add much of this functionality, as is XenSource.
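In the meantime, a live migration can be driven from the open-source toolstack directly. Here is a rough sketch using the libvirt Python bindings; the host and guest names are hypothetical, both hosts need to see the same shared storage, and relocation must be enabled in the Xen configuration on both sides:

```python
# Sketch: live-migrate a running guest between two Xen hosts via libvirt.
# Host and guest names are hypothetical; both hosts must share the same storage
# and have migration/relocation enabled in their Xen configuration.

import libvirt

src  = libvirt.open("xen:///")                       # local source host
dest = libvirt.open("xen+ssh://root@xenhost2/")      # remote destination host

dom = src.lookupByName("web01")                      # hypothetical running guest
dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print("web01 is now domain", dest.lookupByName("web01").ID(), "on xenhost2")
src.close()
dest.close()
```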
Conclusion
For those who still aren't sure, Saugus uses Xen for its virtualization needs. We run Xen on Red Hat for our Linux servers and are using XenEnterprise for our Windows servers. We believe that Xen offers the best mix of price, performance, scalability, and long-term viability. And with XenEnterprise, it truly is "10 minutes to Xen." Go to http://www.xensource.com and download a free two-month trial of XenEnterprise and give it a try.
A good summary. As you probably know, Xen 3.1 has been released: http://www.xensource.com/download/index_oss.html
Thanks for pointing out virt-factory! Need to try that once Fedora 7 is out.
Yes, we're very excited about Xen 3.1, especially running 32-bit guests on 64-bit hosts, and HVM migration. I'm not sure if virt-factory will make FC7, but you can find it on Red Hat's emerging technologies site at http://et.redhat.com
Yes, in fact when I demo Xen, I migrate a vm from a MacBook Pro (Intel Core Duo) to an HP laptop with an AMD Turion 64 chip in it. Works great.
One of the new developments is running the Xen host in 64-bit, with 32-bit VMs on top of it. We have been experimenting with this, and it appears to work well.
So can Xen actually perform live migrations between different generations of Intel and/or AMD processors? Can it actually migrate from Intel platforms to AMD? I ask only because I see so much on the virtualization forums regarding the difficulty of doing so with VMware, but I see little championing these benefits of Xen (if they actually do exist). Thanks in advance.
So, it seems that what may be available on Xen is not necessarily available with XenSource's XenEnterprise product. Their support forum states that they can't do live migrations from Intel to AMD (on a side note, it also stated that live migration is a new feature for XenEnterprise, though it's been available for Xen for quite some time).
I think this is the point my confusion stemmed from. My reason for asking is that my current company is evaluating various server virtualization technologies, and VMware is at the top of their list.
However, live migration w/o the need for a proprietary cluster filesystem and migration capability between different generations of x86 CPUs are two points I wanted to investigate. Hopefully, XenSource can catch up to what Xen is capable of doing soon, so that we can get the best of both worlds in enterprise mgmt tools and functionality.
For a full comparison of Xen and VMware, including the advantages and disadvantages of both, please visit www.itcomparison.com.
Enjoy
Very good comparison for a new user of Xen & VMware. Thank you so much.