The Xen Project is a free and open source hypervisor that enables a computer to run multiple operating systems simultaneously on the same hardware. This article begins a series that covers how Xen achieves this efficiently, its most important features, and the ways in which the Xen Project is supporting new advances in virtualization.
Xen forms the key infrastructure for many Internet hosting service companies and cloud providers, which rely on Xen’s secure isolation of users and efficient resource sharing. Private companies also use Xen to divide up resources on their servers among their internal users. Today, big companies such as Amazon, AMD, Bromium, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon are using Xen in products and services including Citrix Hypervisor, XCP-ng, Oracle VM Server, IBM Cloud, and Amazon EC2. Qubes OS, a security-focused desktop OS, enforces isolation via the Xen hypervisor.
The Xen Project was started by Ian Pratt at the University of Cambridge and is one of the most successful projects to come out of that university. The first version of Xen was released in 2003. Soon after, Ian Pratt, with the help of other Cambridge colleagues, launched the company XenSource, Inc. to bring Xen to the enterprise market. XenSource produced XenEnterprise 3.0 in August 2006, based on version 3.0.0 of the Xen hypervisor, and the company was acquired by Citrix in October 2007, opening a new era for Xen. Citrix open sourced XenServer (now Citrix Hypervisor) in 2013. Finally, on April 15, 2013, the Linux Foundation took the Xen hypervisor under its umbrella and named it the Xen Project.
Virtualization has gone through several stages in industry acceptance. Before the first VMware offering appeared in 1998, people used virtualization primarily on desktop systems, to do something like running Windows on their Linux box. VMware promoted a new and radically more powerful use case: server consolidation. Before VMware, every server ran on its own physical computer. A single hardware server could run multiple server applications (for instance, a database, a web server, and a mail server), but this reduced performance. VMware promoted the idea of buying one larger computer and running several virtual computers inside of it.
In 2002, there was no hardware support for virtualization (HVM) on x86. The state of the art for virtualization was called “binary translation.” This was incredibly complicated, not very fast, and very difficult to get right. There were no competitive open source implementations, so if you wanted virtualization on x86, an expensive VMware license was your only option. Traditional virtualization, like VMware’s, emulated a real computer. The effect was that you had a piece of software (the operating system) talking to a piece of software (the hypervisor) over an interface designed for hardware.
It was in this environment that Xen was conceived in 2002. The core idea was the concept of paravirtualization, described later in this series.
Virtualization really grabbed the attention of the computing industry with Amazon Web Services (AWS). From the beginning, AWS ran on Xen. The business of offering virtualization in the cloud is called Infrastructure as a Service (IaaS).
IBM invented the hypervisor in the 1960s for its mainframe computers. A hypervisor, or virtual machine monitor (VMM), is software or hardware that creates and runs virtual machines. To its users, each virtual machine acts like an independent, stand-alone computer, but behind that illusion it shares the physical processor and other hardware with the other virtual machines. Each VM interacts with the outside world in the usual way, issuing calls and control instructions to hardware and network devices, memory, and CPUs. But behind the scenes, the hypervisor intercepts all these calls and instructions and carries them out in a way that prevents the VMs from interfering with one another and that respects the resource needs of each VM.
Although virtual machines have suddenly become popular in the past decade, the concept dates back to those 1960s mainframes. Nowadays, some hypervisors are even embedded into custom devices.
The physical computer on which the hypervisor runs is called the host machine, and each VM managed by this host is called a guest machine. The hypervisor shares the system resources among the VMs while keeping them isolated, so that no user can accidentally or maliciously see or change another user’s data. With the help of the hypervisor, a system can run multiple operating systems at once and use the system resources efficiently.
Two types of hypervisor exist, called simply type-1 and type-2. A type-1 hypervisor, also known as native or bare-metal, runs directly on the hardware, controls its resources, and manages the guest VMs. Type-1 hypervisors need their own drivers to interact with the particular hardware they run on. At the time of writing this article, modern and popular type-1 hypervisors include the Xen Project, XCP-ng, Citrix Hypervisor (formerly known as XenServer), Microsoft Hyper-V, and VMware ESXi.
A type-2 hypervisor is a computer program that needs an operating system to work. This program acts as an interface between the host operating system and the guest VMs, and shares resources between them. A type-2 hypervisor represents each VM as a process to the underlying operating system and uses the drivers supplied by the host OS. At the time of writing this article, popular type-2 hypervisors include Oracle VirtualBox, VMware Workstation Pro and Player, VMware Fusion, Parallels Desktop, FreeBSD bhyve, and KVM.
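Since this series focuses on Xen, a type-1 hypervisor, here is a taste of how an administrator describes a guest to it. This is a minimal sketch in the configuration format read by Xen’s xl toolstack; the guest name, kernel path, and disk image below are hypothetical placeholders, and real deployments normally add more settings.

    # Hypothetical guest definition, e.g. saved as /etc/xen/test-guest.cfg
    # and started with "xl create /etc/xen/test-guest.cfg".
    name   = "test-guest"                   # unique name of the guest (domain)
    type   = "pv"                           # paravirtualized guest; "hvm" and "pvh" are alternatives
    memory = 1024                           # RAM in MiB
    vcpus  = 2                              # number of virtual CPUs
    kernel = "/var/lib/xen/vmlinuz-guest"   # guest kernel supplied from the host
    disk   = [ 'format=raw, vdev=xvda, access=rw, target=/var/lib/xen/images/test-guest.img' ]
    vif    = [ 'bridge=xenbr0' ]            # one virtual network interface on bridge xenbr0

A file like this is read by the management tools rather than by the hypervisor itself, a split in responsibilities that the next article in this series looks at more closely.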
Virtualization can bring many benefits to your organization and give it new power and capacity. The technology has become widespread and has been extensively discussed in the trade press, but I’ll highlight some of the key benefits that apply to Xen:
Virtualization can reduce the costs of your IT infrastructure. In a non-virtualized environment, each service gets a dedicated physical server, because sharing a computer system among multiple services carries a high risk. But today’s hardware is very powerful, so dedicating a server to one service or application just wastes resources. A virtualized environment lets a single physical server safely host many VMs, each of which can run a different operating system and offer different applications. Fewer physical servers mean lower costs, lower energy use, and less physical space.
For your customers, nothing is more painful than a service outage. When a disaster affects a physical server, IT staff must scramble to replace or fix it, which, depending on the crisis, could take hours or even days. In a virtualized environment, you can easily clone the affected virtual machines in mere minutes.
Why waste your IT team’s time on maintaining a lot of physical servers? VMs can be installed, updated, and maintained with a few clicks, freeing your IT team to spend time on other things, such as learning and implementing new technologies.
Virtualization also gives you more control over the development process. Consider a new update for an operating system or an application: you want to test the update to ensure that it causes no problems. Clone the VM, apply the update, and test it, as sketched below. If no problems appear, apply the update to the production environment.
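As a rough illustration of that workflow on a Xen host, the sketch below clones a guest whose disk is a plain file-backed image. The guest name web01 and all paths are hypothetical, and storage-level snapshots (LVM, ZFS, or a cloud layer’s own cloning) are usually faster, but the xl commands shown are the standard Xen toolstack ones.

    # Hypothetical sketch: clone the guest "web01" to test an update.
    # Assumes a raw disk image at /var/lib/xen/images/web01.img and a
    # configuration file at /etc/xen/web01.cfg; adapt to your own setup.
    xl shutdown -w web01                   # stop the source guest cleanly (or snapshot its storage instead)
    cp /var/lib/xen/images/web01.img /var/lib/xen/images/web01-test.img
    cp /etc/xen/web01.cfg /etc/xen/web01-test.cfg
    # Edit web01-test.cfg: set name = "web01-test" and point its disk entry
    # at the copied image, then boot the original and the clone again.
    xl create /etc/xen/web01.cfg           # restart the production guest
    xl create /etc/xen/web01-test.cfg      # boot the clone
    xl console web01-test                  # log in, apply the update, run your tests
    xl destroy web01-test                  # discard the clone when finished

If the tests pass, the same update can then be applied to the production guest during a planned maintenance window.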
Cutting down on the number of physical servers in your company also reduces the amount of power consumed. Fewer servers mean a smaller carbon footprint and less electronic waste.
The next article in this series explains how Xen is designed and how it offers efficient virtualization on a variety of platforms.