Xen Virtualization and Cloud Computing #01: Introduction

The Xen Project is a free and open source hypervisor that enables a computer to run multiple operating systems simultaneously on the same hardware. This article begins a series that covers how Xen achieves this efficiently, its most important features, and the ways the Xen Project is supporting new advances in virtualization.

Xen forms the key infrastructure for many Internet hosting service companies and cloud providers, which rely on Xen’s secure isolation of users and efficient resource sharing. Private companies also use Xen to divide up resources on their servers among their internal users. Today, big companies such as Amazon, AMD, Bromium, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon are using Xen in products and services including Citrix Hypervisor, XCP-ng, Oracle VM Server, IBM Cloud, and Amazon EC2. Qubes OS, a security-focused desktop OS, enforces isolation via the Xen hypervisor.

The Xen Project was started by Ian Pratt at the University of Cambridge and is one of the university's most successful projects. The first version of Xen was released in 2003. Soon after, Ian Pratt, together with other Cambridge colleagues, launched the company XenSource Inc. to bring Xen to the enterprise market. XenSource produced XenEnterprise 3.0 in August 2006, based on version 3.0.0 of the Xen hypervisor, and was acquired by Citrix in October 2007, opening a new era for Xen. Citrix open sourced XenServer (now Citrix Hypervisor) in 2013. Finally, on April 15, 2013, the Linux Foundation took the Xen hypervisor under its umbrella and named it the Xen Project.

Xen Panda Mascot

Virtualization has gone through several stages in industry acceptance. Before the first VMware offering appeared in 1998, people used virtualization primarily on desktop systems, to do something like running Windows on their Linux box. VMware promoted a new and radically more powerful use case: server consolidation. Before VMware, every server ran on its own physical computer. A single hardware server could run multiple server applications (for instance, a database, a web server, and a mail server), but this reduced performance. VMware promoted the idea of buying one larger computer and running several virtual computers inside of it.

In 2002, there was no hardware support for virtualization (HVM) on x86. The state of the art for virtualization was called “binary translation.” This was incredibly complicated, not very fast, and very difficult to get right. There were no competitive open-source implementations, so if you wanted virtualization on x86, an expensive VMware license was your only option. Traditional virtualization, like VMware's, emulated a real computer. The effect was that you had a piece of software (the operating system) talking to another piece of software (the hypervisor) over an interface designed for hardware.

It was in this environment that Xen was conceived in 2002. The core idea was the concept of paravirtualization, described later in this series.

Virtualization really grabbed the attention of the computing industry with Amazon Web Services (AWS). From the beginning, AWS ran on Xen. The business of offering virtualization in the cloud is called Infrastructure as a Service (IaaS).
 

What is a hypervisor?

IBM invented the hypervisor in the 1960s for its mainframe computers. A hypervisor, or virtual machine monitor (VMM), is software or hardware that creates and runs virtual machines. Virtual machines act just like independent, stand-alone machines and appear that way to the user, but actually share physical hardware with other virtual machines. Each VM interacts with the outside world in its usual way, issuing calls and control instructions to hardware and network devices, memory, and CPUs. But behind the scenes, the hypervisor intercepts all these calls and instructions and carries them out in a way that prevents them from interfering with other VMs and respects the resource needs of each VM.

Although virtual machines have become widely popular only in the past decade, the concept itself dates back to those 1960s IBM mainframes. Nowadays, some hypervisors are even embedded into custom devices.

The machine a hypervisor runs on is called the host machine, and each VM managed by that host is called a guest machine. The hypervisor shares the system resources between the VMs while keeping them isolated, so that no user can accidentally or maliciously see or change another user’s data. With the help of the hypervisor, a system can run multiple operating systems at once and use the system resources in an efficient way.

Two types of hypervisor exist, called simply type-1 and type-2. A type-1 hypervisor, also known as native or bare-metal, runs directly on the hardware, controls the resources, and manages the guest VMs. Type-1 hypervisors need their own drivers to interact with the particular hardware they run on. At the time of writing this article, modern and popular type-1 hypervisors include Xen Project, XCP-ng, Citrix Hypervisor (formerly known as XenServer), Microsoft Hyper-V, and VMware ESXi.

The type-2 hypervisor is a computer program that needs an operating system to work. This program acts as an interface between the operating system and guest VMs, and shares resources between them. The type-2 hypervisor represents each VM as a process to the underlying operating system. Type-2 hypervisors use the drivers supplied by the host OS. At the time of writing this article, popular type-2 hypervisors include Oracle VirtualBox, VMware Workstation Pro and Player, VMware Fusion, Parallels Desktop, FreeBSD bhyve, and KVM.

The difference between type-1 and type-2 hypervisors

Type-1 and type-2 hypervisors have different pros and cons:

  • The pros of type-1 hypervisors lie in performance and security. They offer high performance because the hypervisor has direct access to the hardware. Security is also more reliable on type-1 than on type-2, because no other software sits between the hypervisor and the hardware.
     
  • The main con of type-1 hypervisors is that GUI management of the VMs requires a separate machine. For example, after installing XCP-ng on a machine, that machine is dedicated to XCP-ng and cannot run a shell or desktop alongside it. The result is that you need another machine to connect to XCP-ng and create and manage your VMs. In contrast, many hypervisors like Xen and Microsoft Hyper-V let you run another operating system next to the hypervisor on the same machine. Xen even allows a parallel desktop environment, which is possible but not recommended with Microsoft Hyper-V because of potential vulnerabilities. All these hypervisors, though, can be managed from the command line or over an API (see the sketch after this list).
     
  • The main pro of type-2 hypervisors is simplicity of management. You don’t need to install additional software to manage the virtual machines running on a type-2 hypervisor. This trait makes type-2 virtualization attractive in development environments: you can run and test on multiple operating systems simultaneously without knowing a lot about virtualization. This does not mean that type-1 hypervisors are inappropriate for development environments, just that some users find type-2 hypervisors easier.
     
  • The cons of type-2 hypervisors spring from their need to run on top of another operating system to access hardware resources such as memory, devices, and networking. Thus, performance is inferior to type-1 hypervisors, and security is potentially weaker because an attacker who compromises the host OS can gain access to all the VMs running on that host.
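
Because type-1 hypervisors are usually driven from the command line or over an API rather than from a local GUI, a small example may make this concrete. The sketch below is a minimal illustration using the libvirt Python bindings (the libvirt-python package), which can talk to Xen, Hyper-V, and several of the other hypervisors mentioned above. The connection URI xen:///system and the helper name list_domains are assumptions chosen for illustration, not part of any hypervisor's own toolstack.

    # Minimal sketch: list the VMs on a local Xen host via libvirt.
    # Assumes the libvirt daemon and its Python bindings are installed.
    import libvirt

    def list_domains(uri="xen:///system"):    # assumed URI for a local Xen host
        conn = libvirt.openReadOnly(uri)      # read-only access is enough for inspection
        try:
            for dom in conn.listAllDomains():
                state = "running" if dom.isActive() else "stopped"
                print(f"{dom.name():20s} {state}")
        finally:
            conn.close()

    if __name__ == "__main__":
        list_domains()

The same bindings also expose calls to create, shut down, and migrate guests, which is one common way to script headless type-1 deployments.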

 

Why Virtualization?

Virtualization can bring many benefits to your organization and give it new power and capacity. The technology has become widespread and has been extensively discussed in the trade press, but I’ll highlight some of the key benefits that apply to Xen:

Reduction in costs

Virtualization can reduce the costs of your IT infrastructure. In a non-virtualized environment, each service gets a dedicated physical server, because sharing one system among multiple services carries a high risk. But today’s hardware is very powerful, so dedicating a server to a single service or application just wastes resources. A virtualized environment lets a single physical server host many VMs safely. Each of these VMs can run a different operating system and offer different applications. Fewer physical servers mean lower costs, lower energy use, and less physical space.

Reduced downtime and faster recovery

For your customers, nothing is more painful than a service outage. When a disaster affects a physical server, IT staff must scramble to replace or fix it. Depending on the crisis, this could take hours or even days. In a virtualized environment, you can easily clone the affected virtual machines in mere minutes.

Creativity

Why waste your IT team’s time on maintaining a lot of physical servers? VMs can be installed, updated, and maintained with a few clicks. Your IT team can spend their time on other things, such as learning and implementing new technologies.

Control

Virtualization gives you more control over the development process. Consider a new update for an operating system or an application. You want to test the update to ensure it causes no problems. Clone the VM, apply the updates, and test it. If no problems appear, apply the updates to the main environment.

Help the Earth’s environment

Cutting down on the number of physical servers in your company reduces the amount of power consumed. Fewer servers mean a smaller carbon footprint and less electronic waste.

The next article in this series explains how Xen is designed and how it offers efficient virtualization on a variety of platforms.

Read the next post

About Mohsen Mostafa Jokar:

Mohsen Mostafa Jokar is a Linux administrator and a virtualization engineer. His interest in virtualization goes back to his school days, when he saw Microsoft Virtual PC for the first time. He installed it on a PC with 256 MB of RAM and used it to virtualize Windows 98 and DOS. After that, Mohsen became interested in virtualization and got acquainted with more products. Along with virtualization, Mohsen became acquainted with GNU/Linux. He installed LindowsOS as his first Linux distro, later becoming familiar with Fedora Core, Knoppix, Red Hat, and other distributions. Using Linux, he got acquainted with Bochs, but found it too slow, and after some research discovered QEMU. QEMU was faster than Bochs, and installing the KQEMU module made virtualization even faster. After QEMU, Mohsen got acquainted with Innotek VirtualBox and chose it as his main virtualization application. Innotek VirtualBox had a good GUI and was easy to use. Ultimately, Mohsen got acquainted with Xen, which he loves because it is strong, stable, and reliable. He has written a book about Xen titled "Hello Xen Project" and made it available on the Xen wiki. He made it free in order to help make Xen more approachable and to encourage beginners to use it as their first virtualization platform. He considers himself a "Xen Soldier".
