Introduction
Xen is a set of kernel extensions that allows operating systems which support them to be paravirtualized, giving near-native performance for the guest operating systems. These paravirtualized systems require a compatible kernel to be installed so that they are aware of the underlying Xen host, and the Xen host itself needs to be modified in order to host them. More information can be found at the Xen website.
Sometime in the future, XenSource will release a stable version that supports the installation of unmodified guest machines on top of the Xen host. This requires that the host machine's processor have some form of virtualization technology built in. Both Intel and AMD have their own versions of virtualization technology, VT for short, to meet this new requirement. To distinguish between the two competing technologies, we will refer to Intel's VT by its codename, Vanderpool, and to AMD's by its codename, Pacifica.
Installation
Before starting, it is highly recommended that you visit the Xen Documentation site. It gives a more general overview of what is involved in the setup, as well as some additional information.
Terminology
domain 0 (dom0): In terms of Xen, this is the host domain that hosts all of the guest machines. It allows for the creation and destruction of virtual machines through the use of Python-based configuration files that describe how each machine is to be constructed. It also manages any resources taken up by the guest domains, i.e. networking, memory, physical disk space, etc.
domain U (domU): In terms of Xen, this is the guest domain, or the unprivileged domain. The guest domain has resources assigned to it by the host domain, along with any limits set by the host domain. None of the physical hardware is available directly to the guest domain; instead, the guest domain must go through the host interface to access the hardware.
hypervisor: Xen itself is a hypervisor, or in other words, something that is capable of running multiple operating systems on a single machine. A more general definition is available here.
Prerequisites
Xen Hypervisor Requirements
A preexisting Linux installation, preferably one running a 2.6 kernel. In this case, we'll be running Red Hat Enterprise Linux 4 ES Update 3.
At least 1GB of RAM
40GB+ disk space available
(OPTIONAL) Multiple CPUs. Hyperthreading doesn't count in this case. The more, the better, since Xen 3.0 is capable of virtualized SMP for the guest operating systems.
Guest Domain Requirements
A preexisting Linux installation, preferably something running either the same kernel version as the host-to-be or newer. More on this later in the page.
Some storage for the guest domain. An LVM-based partitioning scheme would be ideal, but you can use a file to back the storage for the machine.
Xen Hypervisor Installation Procedure
Obtain the installation tarball from the XenSource download page. In this case, grab the one for RHEL4.
Extract the tarball to a directory with sufficient space and follow the installation instructions provided by XenSource. For RHEL4, it is recommended that you force the upgrade of the glibc and xen-kernel RPMs, as sketched below; this is explained in more detail further down the page.
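For reference, a forced upgrade with rpm would look roughly like the following; the directory and package file names are placeholders, since the actual names depend on what ships in the tarball:
$ cd /path/to/extracted/xen-install # hypothetical directory the tarball was extracted to
$ rpm -Uvh --force glibc-*.rpm # force the Xen-aware glibc over the stock one
$ rpm -Uvh --force xen-kernel*.rpm # install the Xen kernel packages over any existing ones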
Append the following to the grub.conf/menu.lst configuration file for the GRUB bootloader:
title Red Hat Enterprise Linux ES-xen (2.6.16-xen3_86)
root (hd0,0)
kernel /xen-3.0.gz dom0_mem=192M
module /vmlinuz-2.6-xen root=/dev/VolGroup00/LogVol00 ro console=tty0
module /initrd-2.6-xen.img
The exact kernel and initrd names might change depending on the version that is installed, but for the most part, using just the major versions should work. Details about the parameters will be explained later in the page.
Reboot the machine with the new kernel.
The machine should now be running the Xen kernel.
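To verify that the host is indeed on the Xen kernel, check the running kernel version and ask the Xen tools for the list of domains; dom0 should be the only one listed at this point:
$ uname -r # should report the -xen kernel, e.g. 2.6.16-xen
$ xm list # Domain-0 should be the only domain listed (start the xend service first if xm complains that it isn't running)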
Guest Domain Storage Creation Procedure
LVM Backed Storage
By default, RHEL4 (and basically any new Linux distribution that uses a 2.6 kernel by default) uses LVM (the Logical Volume Manager) to keep track of system partitions in a logical fashion. There are two important concepts in LVM: the volume group and the logical volume. A volume group consists of one or more physical disks that are grouped together at creation time, with each volume group having a unique identifier. Logical volumes are then created on top of a volume group, each with a unique name, carving out whatever size and properties you specify from the pool of space available in the group. If you wish to learn more about LVM, a visit to the LVM HOWTO on the Linux Documentation Project site is recommended.
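As a rough sketch, creating a root and a swap logical volume for a guest could look like the following; the volume group name VolGroup01 matches the example configuration later in the page, and the sizes are arbitrary:
$ lvcreate -L 10G -n xenvm1-root VolGroup01 # root filesystem for the guest
$ lvcreate -L 512M -n xenvm1-swap VolGroup01 # swap space for the guest
$ mkfs.ext3 /dev/VolGroup01/xenvm1-root # format the root volume
$ mkswap /dev/VolGroup01/xenvm1-swap # initialize the swap volume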
Physical Partition Backed Storage
Far easier to create than an LVM setup, but a little less flexible, physical partition backed storage for a guest machine just uses a system partition to store the data of the virtual machine. This partition needs to be formatted with a filesystem that is supported by the host if you are using the paravirtualization approach for domain creation.
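For example, assuming /dev/sdb1 is a spare partition set aside for the guest (the device name is purely illustrative), formatting it with a host-supported filesystem is all that is needed:
$ mkfs.ext3 /dev/sdb1 # any filesystem the dom0 kernel understands will do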
File-Backed Storage
By far the easiest way to get a guest domain up and running, a file-backed store for the guest lets you put the file anywhere there is space, so you don't have to give up any extra partitions in order to create the virtual machine. The tradeoff is a performance penalty.
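A minimal sketch of creating a file-backed store, assuming an arbitrary size of 4GB and an arbitrary path of /var/xen/xenvm1.img:
$ dd if=/dev/zero of=/var/xen/xenvm1.img bs=1M count=1 seek=4095 # creates a roughly 4GB sparse file
$ mkfs.ext3 -F /var/xen/xenvm1.img # -F is required since this is a regular file, not a block device
In the guest configuration, such a file is referenced with a 'file:' prefix instead of 'phy:', e.g. 'file:/var/xen/xenvm1.img,hda1,w'.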
Guest Domain Installation Procedure
Create an image tarball from the preexisting Linux installation for the guest. Use tar along these lines, replacing <tarball path> with wherever you want the image written (excluding the tarball itself keeps tar from trying to archive its own output):
tar --exclude=<tarball path> --exclude=/sys/* --exclude=/tmp/* --exclude=/dev/* --exclude=/proc/* -czpvf <tarball path> /
Note that the excludes come before rather than after the short flags. This is because the -f short option is positional, and thus it needs the archive file name immediately after the option.
Move the tarball over to the Xen hypervisor machine.
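For example, with scp (the host name and paths are placeholders):
$ scp guest-image.tgz root@xenhost:/var/tmp/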
Mount the guest storage at the desired location on the hypervisor.
Unpack the tarball into the guest storage partition.
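Assuming the LVM volumes from the earlier example and a mount point of /mnt/guest (both illustrative), these two steps would look like:
$ mkdir -p /mnt/guest
$ mount /dev/VolGroup01/xenvm1-root /mnt/guest # mount the guest root volume
$ tar -xzpvf /var/tmp/guest-image.tgz -C /mnt/guest # unpack the image, preserving permissions and ownership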
Copy the modules for the Xen kernel into the guest's /lib/modules directory. You can use the following command to copy the modules directory, replacing <mount point> with the guest storage mount point:
$ cp -r /lib/modules/`uname -r`/ <mount point>/lib/modules/
Move the /lib/tls directory to /lib/tls.disabled for the guest. This operation is specific to Red Hat-based systems. Due to the way that glibc is compiled, the guest operating system will incur a performance penalty if this is not done. Ignore this step for any non-Red Hat systems.
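As a concrete illustration of the /lib/tls step, again assuming the /mnt/guest mount point used above:
$ mv /mnt/guest/lib/tls /mnt/guest/lib/tls.disabled # avoids the glibc TLS performance penalty under Xen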
Initial setup of the guest is completed.
Running With Xen
Creating and starting a guest domain
Create a guest configuration file under /etc/xen. Use the following example as a guideline:
kernel = "/boot/vmlinuz-2.6-xen" # The kernel to be used to boot the domU
ramdisk = "/boot/initrd-2.6.16-xenU.img" # Need the initrd, since most of these systems run udev
memory = 256 # Base memory allocation
name = "xmvm1" # Machine name
cpus = "" # Specific CPU's to assign the vm, leave blank
vcpus = 1 # Number of available CPU's to the system
vif = [ '' ] # Defines the virtual network interface
# LVM-based storage
disk = [ 'phy:VolGroup01/xenvm1-root,hda1,w', # Guest storage device mapping to the virtual machine
'phy:VolGroup01/xenvm1-swap,hda2,w' ]
root = "/dev/hda1 ro" # Root partition kernel parameterMount the guest storage partition and edit the /etc/fstab for the guest to reflect any changes made to the configuration file. Remove any extraneous mount points that won't be recognized by the guest when the system is started, otherwise the guest machine will not boot.
Start the machine using the following command, replacing <config name> with the name of the configuration file you created under /etc/xen:
$ xm create -c <config name>
This will create the machine and attach it to a virtual console. You can detach from the console using CTRL-].
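A few other xm commands are useful once the guest is up (using the machine name from the example configuration):
$ xm list # show all running domains and their resource usage
$ xm console xmvm1 # re-attach to the guest's console
$ xm shutdown xmvm1 # cleanly shut the guest down
$ xm destroy xmvm1 # hard-stop the guest if it is wedged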
Further setup is still required, but it is OS-specific. In particular, the network interfaces will need to be set up for the guest machine.
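As an example of that OS-specific setup, on a Red Hat style guest you could point eth0 at DHCP by editing /etc/sysconfig/network-scripts/ifcfg-eth0 in the guest storage before booting (a minimal sketch):
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
If you want the guest to keep the same DHCP lease across restarts, you can also pin a MAC address in the vif line of the configuration file, e.g. vif = [ 'mac=00:16:3e:00:00:01' ].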
3 comments:
The network interface could be made OS-independent if you use DHCP. With Xen you can force the MAC address of domUs, so you can configure dhcpd to provide IPs to your VMs easily.
It was a good blog. A well researched one. Keep us informed all the time.
Thanks
In my experience, using a single file as opposed to LVM was conceptually much clearer.
Also, using a pre-built image is easier.
The hardest part for me was getting the network working, i.e. talking to the outside world. openSUSE had some bad settings in the xend scripts that I had to change. Plus, I had to define the IP both in the domU conf and inside the domU itself.