Aug 10, 2007
Article after article and post after post have compared and contrasted Xen, VMware, Viridian, and a host of other virtualization technologies, with opinions on performance, management tools, implementations, and so on in abundant supply. Inevitably, when it comes to Xen, the story comes full circle with some sort of declaration about "data center readiness." The definition of "ready for the data center" is quite subjective, of course, based largely on the author's personal experience, skills, and opinion of the technical capabilities of those managing this vague "data center" to which they refer.
Sadly, most seem to think that the IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn't include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file - a common task for any system administrator. Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen this is particularly prevalent, as the Xen engine and its management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools, at varying degrees of maturity, available or in development for the highly capable Xen engine.
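For the record, here is roughly what such a dreaded configuration file looks like - a minimal paravirtualized guest definition in the Python-style syntax Xen 3.x uses (the guest name, volume, and kernel version here are hypothetical):

    # /etc/xen/web01 - a minimal paravirtualized guest (Xen 3.x syntax)
    name    = "web01"
    memory  = 512                                # MB of RAM for the guest
    vcpus   = 2
    kernel  = "/boot/vmlinuz-2.6.18-xen"         # a Xen-aware guest kernel
    ramdisk = "/boot/initrd-2.6.18-xen.img"
    disk    = [ "phy:/dev/vg0/web01,xvda,w" ]    # LVM volume as the guest's disk
    vif     = [ "bridge=xenbr0" ]                # attach to the default bridge
    root    = "/dev/xvda ro"

Hardly rocket science for anyone who has ever touched httpd.conf.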
And yet, I can't help but think that comparing features of management tools completely misses the point. Why are we focusing on the tools rather than the technology? Shouldn't we be asking, "Where is virtualization heading?" and "Which of these technologies has the most long-term viability?"
Where is virtualization technology heading?
To even the most passive observer it has to be obvious that virtualization is here to stay. What may not be so obvious are the trends, the first being integrated virtualization. Within a year, every major server operating system will have virtualization technology integrated at its core. Within a few short years, virtualization functionality will simply be assumed - an expected capability of every server-class operating system. As it is with RHEL now, administrators will simply click a "virtualization" checkbox at install time.
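(On RHEL 5 that checkbox is really just a package group plus a Xen-enabled kernel, so the same thing can be bolted onto an existing system - a rough sketch, assuming the stock RHEL 5 group name:

    # Add the virtualization stack to an existing RHEL 5 system
    yum groupinstall Virtualization
    # A Xen-enabled kernel entry is added to grub; reboot into it, then verify:
    xm list

Nothing about it is exotic.)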
The second trend is in the technology itself: the "virtualization-aware" operating system. In other words, the operating system will know that it is being virtualized, and will be optimized to perform as such. Every major, and even most minor, operating systems either have or will soon have a virtualization-aware core. Performance- and scalability-sapping binary translation layers and dynamic recompilers will be a thing of the past, replaced by thin hypervisors and paravirtualized guests. Just look at every major Linux distro, Solaris, BSD, and even Microsoft's upcoming Viridian technology on Windows Server 2008, and you can't help but recognize the trend.
Which of these technologies has the most long-term viability?
Since we now know the trends, the next logical step is to determine which technology to bet on long term. Obviously, the current crop of technologies based on full virtualization, like KVM and VMware (it's not a hypervisor, no matter what they say), will prosper in the near term, capitalizing on the initial wave of interest and their simplicity. But, considering the trends, the question should be, "Will they be the best technology choice for the future?" The reality is that, in their current state and with their stated evolutionary goals, full virtualization solutions offer little long-term viability as integrated virtualization continues to evolve.
And which technology is everyone moving to? That's simple - paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of its partnership with Novell, Microsoft will all either run Xen directly or be Xen-compatible in very short order.
Of course, those with the most market share will continue to sell their solutions as "more mature" and/or "enterprise ready" while continuing to improve their tools. Unfortunately, they will also continue to lean on an outdated, albeit refined, technology core. That core may keep evolving, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution. It reminds me of the ice farmers' response to the refrigerator - rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn't as good.
So then, is Xen ready for the "data center"?
The simple answer is - that depends. As a long-time (as these things go, anyway) user of the Xen engine in production, I can say with confidence that the engine is more than ready. All of the functionality of competing systems, and arguably more, is working and rock solid. And because the system is open, the flexibility is simply unmatched. Choose your storage or clustering scheme, upgrade to a better one when it becomes available, use whatever configuration matches your needs - without restriction. For *nix virtualization, start today.
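To make that flexibility concrete: the guest's disk line alone accepts whatever backing store suits you, and switching schemes is a one-line change (device names here are illustrative):

    # Three interchangeable backing stores for the same guest disk:
    disk = [ "phy:/dev/vg0/web01,xvda,w" ]        # raw LVM volume (easy snapshots and resizing)
    disk = [ "file:/var/xen/web01.img,xvda,w" ]   # loopback file (trivial to copy around)
    disk = [ "phy:/dev/san/lun7,xvda,w" ]         # shared SAN LUN (enables live migration)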
For Windows virtualization, the answer is a bit more complex. Pending Viridian, the stopgap is to install Windows on Xen with so-called "paravirtualized drivers" for I/O. Currently these are only available with XenSource's own XenServer line, but they will soon be available on both Novell and Red Hat platforms (according to Novell press releases and direct conversations with Red Hat engineers). While these drivers easily match the performance of the fully virtualized competition, they are not as fast as a true paravirtualized guest.
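For reference, a Windows guest runs as a fully virtualized (HVM) domain. The sketch below follows the Xen 3.x format; the name, volumes, and helper paths are hypothetical and vary by distribution:

    # /etc/xen/win2003 - a fully virtualized (HVM) Windows guest
    name    = "win2003"
    builder = "hvm"
    memory  = 1024
    kernel  = "/usr/lib/xen/boot/hvmloader"       # HVM firmware, not a guest kernel
    device_model = "/usr/lib/xen/bin/qemu-dm"     # emulates disk/network for the guest
    disk    = [ "phy:/dev/vg0/win2003,hda,w" ]
    vif     = [ "type=ioemu, bridge=xenbr0" ]     # emulated NIC until PV drivers take over
    boot    = "dc"                                # try CD first for the Windows installer
    vnc     = 1                                   # install via a VNC console

The paravirtualized drivers then replace the emulated (ioemu) disk and network paths from inside Windows, which is where the I/O performance comes from.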
Of course, you could simply choose to wait for Viridian, but I would assert that there are several advantages to going with Xen now. First, you'll already be running on Xen, so you'll be comfortable with the tools and will likely incur little, if any, conversion cost when Viridian goes gold. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64-bit guests and 32-bit paravirtualized guests on 64-bit hosts.
So what's the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving. Do you go with XenSource's excellent, yet more restrictive, tool set; a more open platform such as Red Hat or Novell; or even a free release such as Fedora 7? That depends on your skills and intestinal fortitude, I suppose. If you are lost without wizards and a mouse, I'd say XenSource is the way to go. For the rest of us, a good review of all the available options is in order.
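If you can live without the wizards, the day-to-day workflow with the stock xm tool is short enough to memorize (guest name hypothetical, matching the earlier config):

    xm create /etc/xen/web01      # boot the guest defined in the config file
    xm list                       # show running domains and their resource usage
    xm console web01              # attach to the guest's console
    xm mem-set web01 768          # adjust the guest's memory on the fly
    xm shutdown web01             # cleanly shut the guest down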
What about that "long term"?
So we know that virtualization-aware operating systems are the future, but how might they evolve? Well, since we know that one of the key benefits of virtualization is that it makes the guest operating system hardware-agnostic, and we know that virtualization-aware guests on hypervisors are the future, it seems reasonable to conclude that most server operating systems will install as paravirtualized guests by default, even if only one guest will be run on the hardware. This will, by its very nature, create more stable servers and applications, facilitate easy-to-implement scalability, and offer improved performance and manageability across platforms.
As for my data center, this is how we install all new hardware, even single-task equipment - Xen goes on first, followed by the OS of choice. We get great performance and stability, along with the comfort of knowing that if we need more performance or run into any problems, we can simply move the guest operating system to new hardware with almost no downtime. It's a truly liberating approach to data center management.
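That move is a single command, provided the guest's storage is shared between the hosts and relocation is enabled in xend-config.sxp on the target (hostname hypothetical):

    # Live-migrate a running guest to another Xen host with shared storage
    xm migrate --live web01 newhost.example.com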