May 28, 2007

The One Laptop per Child project has got to be one of the most controversial topics in both education and technology. The little green wonder has been incessantly discussed and debated, at once glorified for its ideals and derided for a host of perceived shortcomings. "It doesn't run Windows," they say (as if that's a bad thing, but I'll save that for another post), followed by some errant declaration like, "most software runs on Windows - how can you deny these kids access to it?" On the technical side, there's the "what about technical support?" question and "who will repair them when they break?" A few of my favorites from the education arena have been "who will teach the kids to use the machines?" and "who's going to train the teachers and provide the curriculum?" Then we step into the silly, like "what if the parents never let their kids use them?" and "what about all the bad things on the Internet?", and finally the outright ridiculous, like "how will we protect these kids from Internet addiction?"

May 1, 2007

There is no question that virtualization has captured the attention of enterprises of all shapes and sizes. And it's easy to understand why - the benefits are simply undeniable. Who wouldn't be interested in lower total cost of ownership (TCO), better resource utilization, improved reliability, increased flexibility, and rapid deployment - among other gains?

One of the biggest newsmakers in virtualization has been the open source Xen project, and for good reason. The technology is very well designed, is extraordinarily fast and scalable, and is supported by EVERY major OS, server, and silicon vendor - even Microsoft. But as XenSource CTO Simon Crosby says, "the Xen hypervisor is an engine, and not a car. A powerful engine that needs a great gearbox, shocks, tires and the body to go with it."

And that's where the vendors and the open source community come in. There are several solutions that use Xen virtualization as a base, but add functionality to its core capabilities through management tools and enhancements. These tools and enhancements are not required - Xen is completely functional (arguably more so) on its own - they simply make it easier to manage and use. We've been using the open source version of Xen in production since July 2006 - longer if you include testing and evaluation - so we have intimate knowledge of what the "engine" can do. In recent months, however, we have had occasion to evaluate two models of the "car."
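To make the "engine" concrete: with nothing but the open source distribution, a guest is defined in a small Python-syntax config file and driven with the xm toolstack that ships with Xen. The sketch below is illustrative only - the file name, guest name, paths, and MAC address are invented, and option details vary across 3.x releases. (This particular sketch defines a paravirtualized guest - more on the two approaches below.)

    # /etc/xen/web01.cfg -- hypothetical paravirtualized guest (Xen 3.0.x-era syntax)
    kernel  = "/boot/vmlinuz-2.6-xen"       # Xen-aware guest kernel
    ramdisk = "/boot/initrd-2.6-xen.img"
    name    = "web01"
    memory  = 512                           # MB allocated to the guest
    vcpus   = 2
    vif     = [ "mac=00:16:3e:00:00:01, bridge=xenbr0" ]  # paravirtual NIC
    disk    = [ "phy:/dev/vg0/web01,xvda,w" ]             # LVM-backed block device
    root    = "/dev/xvda ro"

From there, xm create /etc/xen/web01.cfg boots the guest, xm list shows it running, and xm console web01 attaches to it. The engine is fully drivable on its own; what the "car" vendors layer on top is convenience - dashboards, resource pooling, automated placement, and the like.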

Technology

Before you can have any sort of serious conversation about these two systems, you must first understand their underlying technologies, as well as their overall approach. At present, there are two primary approaches to virtualization: paravirtualization and full virtualization. I'll start with full virtualization, both because it came along first and because it's a little easier to understand.

In a fully virtualized system, the virtualized operating system is presented with a completely emulated machine composed of emulated hardware devices. This "presentation layer" typically runs on top of a complete operating system, such as Windows or Linux. The presentation layer is completely consistent from virtual machine to virtual machine, regardless of the underlying hardware. So, for example, a virtual machine will always see a Realtek network card, a standard SCSI controller, and so on. This allows drivers to be consistent from virtual machine to virtual machine, which nets you flexibility, consistency, and ease of installation (as well as stability, as was mentioned in a prior post). Best of all, because all of the hardware is emulated, you can run virtually any operating system on it, unmodified.
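For contrast with the paravirtualized config shown earlier, here is roughly what a fully virtualized (HVM) guest definition looks like under Xen. Again, this is a hedged sketch - the file names and guest name are invented, and the options reflect the 3.0.x-era HVM support, which borrows QEMU's device model to supply the emulated hardware:

    # /etc/xen/winguest.cfg -- hypothetical fully virtualized (HVM) guest
    kernel       = "/usr/lib/xen/boot/hvmloader"  # HVM firmware loader, not a guest kernel
    builder      = "hvm"
    device_model = "/usr/lib/xen/bin/qemu-dm"     # QEMU supplies the emulated devices
    name         = "winguest"
    memory       = 1024
    vif          = [ "type=ioemu, bridge=xenbr0" ]        # emulated NIC
    disk         = [ "file:/var/xen/winguest.img,hda,w" ] # emulated IDE disk
    boot         = "c"
    vnc          = 1                              # graphical console over VNC

Notice that this guest is handed QEMU's emulated IDE controller and network card rather than Xen's paravirtual devices - which is exactly why an unmodified operating system, Windows included, can boot on it.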

There are some drawbacks to this approach, however. First, since the hardware is emulated, there is a good deal of translation taking place, which costs performance. Essentially, there are two layers of drivers translating requests between the software and the hardware. For example, let's say a software package on a virtual machine wants to send a packet out to the network interface. It sends a standard request to its operating system, which in turn forwards that request to the driver for the emulated network card. The driver then converts the request from software language to hardware language and passes it down the stack to the presentation layer. The presentation layer takes that hardware request for the emulated hardware, converts it back to a software request, and hands it off to the core OS running on the hardware. The core OS then hands the request to the real hardware driver, which translates it again to hardware language and finally passes it to the actual hardware. Basically, I/O flows up and down a stack like this (requests travel down; responses make the same trip in reverse):

    Application (guest VM)
      | software request
    Guest operating system
      | software request
    Emulated-device driver (guest)
      | translated to "hardware" operations
    Presentation layer (emulator)
      | translated back to a software request
    Core OS (host)
      | software request
    Real hardware driver
      | translated to hardware operations
    Physical hardware
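If it helps to see that round trip as code, here is a deliberately toy model - plain Python, not real Xen or QEMU code, with function names I made up - of the two translation layers a single outbound packet crosses:

    # Toy model of full-virtualization I/O: illustrative only, not real hypervisor code.

    def encode_for_nic(packet):
        # Stand-in for a driver turning a software request into register-level operations
        return list(packet)

    def decode_from_nic(request):
        # Stand-in for the emulator reversing that translation
        return bytes(request["regs"])

    def emulated_nic_driver(packet):
        # Guest driver: software request -> "hardware" operations on the emulated NIC
        return {"op": "tx", "regs": encode_for_nic(packet)}

    def presentation_layer(hw_request):
        # Emulator: decodes the fake hardware operations back into a software request
        return decode_from_nic(hw_request)

    def real_nic_driver(packet):
        # Host driver: software request -> real hardware operations
        return {"op": "tx", "regs": encode_for_nic(packet)}

    packet = b"hello"
    hw_req   = emulated_nic_driver(packet)   # guest: software -> hardware (emulated)
    sw_req   = presentation_layer(hw_req)    # emulator: hardware -> software
    wire_req = real_nic_driver(sw_req)       # host: software -> hardware (real)

Two full encode/decode round trips happen before the packet ever touches the wire - that overhead is the price of presenting fully emulated hardware.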

At the Consortium of School Networking (CoSN) conference last month, I did an interview with Managing Editor Dennis Pierce regarding open technologies in K-12 and CoSN's K-12 Open Technologies initiative. The interview is titled "Who's afraid of Open Tech?"