Getting started with Xen

Some basic Xen pieces that might save you some trouble

Getting started with Xen on a Debian system

Alright, you wanna try out Xen? It's really easy on Lenny; just follow the steps below to get started!

Prepare your system

First you need a few packages, a special kernel, and some minor tweaks.

  1. install a debian xen kernel and support packages: xen-linux-system-2.6.26-2-xen-686 (check for a newer version) and xen-shell
  2. install the xen-tools package from backports.org; you need at least version 4.1-1~bpo50+1. Be sure to follow the instructions on getting packages from backports
  3. if you are using a serial console, modify your grub menu.lst so you have these options set (make sure your kopt only has console=tty0, if anything; no serial output here). It's also a good idea to restrict your dom0's memory and vcpus. Once you do this and reboot, the dom0 will only show the limited memory and CPUs, but the rest remain available for your domUs:
     
    # xenhopt=dom0_mem=524288 dom0_max_vcpus=1 dom0_vcpus_pin=true loglvl=all guest_loglvl=all com1=115200,8n1 console=com1,vga 
    # xenkopt=console=hvc0 clocksource=jiffies earlyprintk=xen nomodeset
    
  4. run update-grub
  5. modify /etc/inittab to have: 1:2345:respawn:/sbin/getty 38400 hvc0
  6. reboot
  7. make sure /etc/xen-tools/xen-tools.conf has serial_device = hvc0. NOTE: although the config file says this is the default, if you do not set it explicitly, the inittab inside the domU will not be set up correctly, and you will have to mount the domU root in the dom0 (while the domU is shut down!) and modify its /etc/inittab to have: 1:2345:respawn:/sbin/getty 38400 hvc0
  8. edit /etc/xen/xend-config.sxp and make sure that these are un-commented:
    (dom0-min-mem 512)
    (enable-dom0-ballooning no)
    (vif-script vif-bridge) and (network-script network-bridge)
  9. /etc/init.d/xend restart
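
With xend restarted, it's worth sanity-checking that the hypervisor came up with the limits you configured; xm list should show Domain-0 with the restricted memory and VCPU count, and xm info shows what is left over for domUs:

# xm list
# xm info | grep -E 'nr_cpus|free_memory'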

Build your first Xen DomU

What is a DomU? Well, it's just your virtualized ‘guest’, ‘instance’ or ‘image’: a complete, enclosed system. The Dom0 is what your host is called.

  1. now you need to create a xen image. If you are using LVM, this is easy, because you can specify an existing volume group with free space: xen-create-image --hostname=somehost.domain.org --ip=x.x.x.x --dist=lenny --lvm=vg_existing0 --role=udev
  2. if you want to use a role other than udev, you should also include --debootstrap='--include=udev', otherwise you will be missing the console device (hvc0) in inittab in the domU, and without udev, logging in via ssh won't work (a missing /dev/pts). That makes logging in via xm console and via ssh impossible, because getty doesn't have a proper console to attach to and ssh can't allocate a pseudo-terminal.
  3. once this is finished, start the guest and you can do xm console <domU name>, or ssh to root at the IP you provided (see the example below).
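
For reference, xen-create-image writes the guest's config file to /etc/xen/<hostname>.cfg, so bringing up and reaching the new guest looks roughly like this (using the hostname from the example above):

# xm create /etc/xen/somehost.domain.org.cfg
# xm console somehost.domain.org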

Migrating a host to a Xen domU

(thanks to stefani for the recipe!)

  1. mount the (not live/running) filesystem of the system you want to virtualize, for example:
    # mount /dev/sda2 /mnt
    
  2. use xen-create-image to build your DomU, based on the mounted filesystem:
    xen-create-image  --size=6Gb \
    --hostname=my.hostname \
    --ip=x.y.z.a \
    --lvm=xenguests \
    --role=udev \
    --install-method=copy \
    --install-source=/mnt
    
  3. after installation, mount the new domU:
    mount /dev/mapper/xenguests--my.hostname--disk   /somewhere
    
  4. then edit the interfaces file, as the default has the eth0 interface set, and that probably won't be right for your system. You may need to add gateway and netmask to the interfaces file too … I did. A minimal example follows below.
    # cd /somewhere/etc/network
    <hack>
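
For reference, a minimal static /etc/network/interfaces for the domU might look like this (the addresses are placeholders; use your own):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address x.y.z.a
    netmask 255.255.255.0
    gateway x.y.z.1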
    

Migrating a Linux-Vserver to a Xen domU

Migrating a vserver to a xen domU is easy; very much like migrating a live system (above). Note that the purpose of these instructions is to migrate a Linux-Vserver to instead be a domU. If you want to migrate Linux-Vservers inside a domU running a Linux-Vserver kernel see the next section.

  1. build a generic xen domU
    xen-create-image  --size=8Gb --hostname=bongo --ip=192.168.1.10 --lvm=xenguests --role=udev --dist=lenny --swap=256 --memory=512
    
  2. mount the domU disk (eg. mount /dev/mapper/xenguests--bongo--disk /somewhere)
  3. rsync the vserver's filesystem to the mounted xen domU, eg. rsync -al /path/vserver/{bin,etc,…/var}
     – note you do not need to rsync /dev or /proc, and likely not /mnt /media /opt /selinux /srv /sys
     cd /path/to/vserver/oldbongo
     for i in bin boot etc home lib lib64 opt root sbin tmp srv usr var; do
         rsync -al $i/   /somewhere/$i
     done
    
  4. check configuration files in /etc, especially /etc/network/interfaces, and hack where necessary (see the checklist below)
  5. umount the xen domU
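
As a rough checklist of what usually needs touching after the rsync (paths are relative to the mounted domU; the xvda names assume the disk layout xen-create-image set up):

# cd /somewhere/etc
# $EDITOR network/interfaces   # IP, netmask, gateway for the new guest
# $EDITOR fstab                # root/swap should point at /dev/xvda*
# $EDITOR hostname             # if the guest is being renamed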

Running Linux-Vservers inside of a Xen domU

You can run Linux-Vservers inside of a Xen domU, and there are reasonable reasons to do so. It might seem strange to have nested virtualization like this, but it works and lets you have the best of both worlds (this would give you live migration of Linux-Vserver guests, for example).

To do this on Debian, you need to create a domU that has a Linux-Vserver kernel installed in it. For Debian Lenny, install the linux-image-2.6.26-2-vserver-686-bigmem kernel, as it is configured to run inside of a domU:

  1. Create the domU, but don’t start it
  2. mount its filesystem (eg. mount /dev/mapper/vg_volgrp-hostname--disk /mnt)
  3. mount -t proc proc /mnt/proc
  4. mount -t devpts devpts /mnt/dev/pts
  5. chroot /mnt
  6. and edit the /etc/fstab to change its devices from /dev/sd* to /dev/xvd* (so /dev/sda1 should be changed to /dev/xvda1, and so on)
  7. apt-get install linux-image-2.6.26-2-vserver-686-bigmem
  8. exit
  9. copy the new kernel and its initrd out to the dom0's /boot (the config below points at these copies):
     cp /mnt/boot/vmlinuz-2.6.26-2-vserver-686-bigmem /boot/vmlinuz-2.6.26-2-vserver-686-bigmem-domU
     cp /mnt/boot/initrd.img-2.6.26-2-vserver-686-bigmem /boot/initrd.img-2.6.26-2-vserver-686-bigmem-domU
  10. umount /mnt/dev/pts; umount /mnt/proc; umount /mnt
  11. edit /etc/xen/hostname.cfg and change the kernel and initrd lines:
    kernel      = '/boot/vmlinuz-2.6.26-2-vserver-686-bigmem-domU'
    ramdisk     = '/boot/initrd.img-2.6.26-2-vserver-686-bigmem-domU'
    
  12. change the root and disk device lines to not use sda:
    root        = '/dev/xvda2 ro'
    disk        = [
                      'phy:/dev/vg_finch1/canary.riseup.net-disk,xvda2,w',
                      'phy:/dev/vg_finch1/canary.riseup.net-swap,xvda1,w',
                  ]
    
  13. consider increasing the number of vcpus and the memory
  14. now start the domU and create/run Linux-Vserver guests in it!
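
Once the domU is running its vserver kernel, creating a guest inside it works like on any other Linux-Vserver host. A sketch using util-vserver's debootstrap build method (the guest name and address here are made up; adjust to taste):

# apt-get install util-vserver
# vserver guest1 build -m debootstrap \
    --hostname guest1 --interface eth0:192.168.1.20/24 \
    -- -d lenny
# vserver guest1 start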

Deleting a Xen DomU

  • xen-delete-image --hostname foo.riseup.net --lvm=vg_bar0
  • if you set up any additional logical volumes for the host, you will need to delete them by hand with lvchange -an vg_foo0/bar_mysql; lvremove vg_foo0/bar_mysql

Some other useful stuff, a FAQ of sorts

xm shell

xm shell will get you a shell where you can attach to consoles, start and stop domUs, or look at resource usage.
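
For example (exit the shell with control-d):

# xm shell
xm> list
xm> console bongo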

the xen console

The xen console, which you can use to access a xen domU (xm console domU), can be detached by using the old telnet-style escape sequence: control-]

encrypted root dom0

Q. My host has an encrypted root partition; when I start up the xen domU, it never finishes booting because it asks for the encryption passphrase on boot (even though it doesn't need to!)
A. That is because it is using your host's initrd; you need to generate a new one just for the xen domU that doesn't depend on a cryptoroot setup. (Alternatively, you can use pygrub, so that the initrd is handled in the domU itself; see the next item on how to set that up.) To generate a dedicated initrd, follow these steps:

# stop the domU
# mount /dev/mapper/<logical volume where you built the domU> /mnt
# chroot /mnt
# aptitude install initramfs-tools
# cd /boot
# mkinitramfs -v -o initrd.img-`uname -r`-domU
# exit 
# cp /mnt/boot/initrd.img-blahblabhah /boot

Now you just need to edit your domU's config file in /etc/xen/ and set the ramdisk line to this newly generated initrd (note: 'uname -r' inside the chroot still reports the dom0's running kernel, so if the domU uses a different kernel, pass the domU kernel version to mkinitramfs explicitly):

ramdisk     = '/boot/initrd.img-2.6.26-1-xen-686-domU'

Once you've done this once, you can create new xen domUs with this initrd automatically by passing the --initrd argument to xen-create-image. Note that you should probably regenerate this initrd when new kernels come out. If someone has scripted this, I would like to know!
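
For what it's worth, here is an untested sketch of what such a script might look like; the volume path and kernel version are assumptions to replace with your own:

#!/bin/sh
# regenerate a domU's initrd after a kernel upgrade; run on the dom0
# while the domU is shut down
LV=/dev/mapper/vg02-my_domU--disk   # assumed: the domU's root volume
KVER=2.6.26-2-xen-686               # assumed: kernel version to build for

mount $LV /mnt
chroot /mnt mkinitramfs -o /boot/initrd.img-$KVER-domU $KVER
cp /mnt/boot/initrd.img-$KVER-domU /boot/
umount /mnt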

Using pygrub

PyGrub lets you start domUs with the kernels that are inside the filesystem of the domU, instead of using the ones from the dom0. This is useful because it makes kernel updates in domUs much easier, and also provides the domU with additional control over their environment.

To get this to work, you first have to change the domU configuration (which is located on the dom0 in /etc/xen) so that the disk entries use the xvd devices, with the root partition listed as the first partition in the disk list. You will also want to comment out the kernel/initrd lines and add the bootloader line:

#kernel      = '/boot/vmlinuz-2.6.26-2-xen-686'
#ramdisk     = '/boot/initrd.img-2.6.26-2-xen-686-domU'

bootloader      = '/usr/lib/xen-default/bin/pygrub'

root        = '/dev/xvda2 ro'
disk        = [
                  'phy:/dev/vg02/my_domU-disk,xvda2,w',
                  'phy:/dev/vg02/my_domU-swap,xvdb1,w'
              ]

On the domU itself, you should edit /etc/fstab to make it point to the xvd devices you configured above. You should then prepare for grub installation and install a kernel.
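
A matching /etc/fstab inside the domU would look something like this (the device names follow the disk stanza above):

/dev/xvda2    /        ext3    errors=remount-ro    0    1
/dev/xvdb1    none     swap    sw                   0    0
proc          /proc    proc    defaults             0    0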

This is a little tricky if you have just changed the above disk stanzas from sda to xvda but have not yet restarted the domU; it can be done, though. If you have already restarted your domU so that it booted off the xvda device, you can ignore the sda bits below.

# mkdir /boot/grub
# echo "(hd0)   /dev/sda" > /boot/grub/device.map
# cd /dev
# mknod sda b 202 0
# mknod xvda b 202 0
# aptitude install linux-image-2.6-xen-686 grub
# echo "(hd0)   /dev/xvda" > /boot/grub/device.map
# $EDITOR /boot/grub/menu.lst

In menu.lst, change root=/dev/sda2 to root=/dev/xvda2, and be sure to change it on the kernel lines too. You cannot run update-grub until you have restarted the domU so that it is booting with xvda as its root device.

Now, check to make sure that grub was properly installed on the domU by doing this:

# /usr/lib/xen-default/bin/pygrub /dev/vg02/my_domU-disk

You should get a grub menu, and then when the timer counts down, it will abort. Now, when you ‘xm create’ the domain, grub will be used to boot it. Once it boots up, you should be able to run ‘update-grub’ on the xvda device.

Note: If you are using Debian Squeeze, you can use Grub2 (it requires xen-utils >= 3.4.3 on the dom0).

Redmine on a xen domU

When creating the xen instance, set the memory to at least 512 MB; otherwise you'll often find yourself restarting apache, or the whole domU, to free memory:

xen-create-image  --size=8Gb --hostname=redmine --ip=192.168.1.1 --lvm=xendomu --role=udev --dist=lenny --swap=256 --memory=512

starting a domU on boot

To get a domU to start on boot, you simply need to make sure that the /etc/xen/auto directory exists and then do this:

ln -s /etc/xen/domain_config_file /etc/xen/auto/domain_config_file

iret exception on Debian Lenny

I was having problems with multiple vCPUs regularly causing the Dom0 to crash, usually with an ‘iret exception’. I reported my findings in Debian bug #504805.

Eventually I stumbled on a way to keep my machines from restarting. It's not a great solution, but it stops me from having to deal with the failure on a daily basis. I think anyone else who is having this problem can do this and it will work.

First I made sure this was set:

/etc/xen/xend-config.sxp: (dom0-cpus 0) 

Then I pinned individual physical CPUs to specific domUs; once pinned, the problem stops.

What does that mean? Well, Xen does this wacky thing where it creates virtual CPUs (VCPUs), each domU has one of them by default (but you can have more), and then it moves physical CPUs between those VCPUs depending on need.

So let's say you have four CPUs and a domU. That domU has one VCPU by default. That VCPU could be serviced by physical CPU 0, 1, 2, or 3, moving between them over time. I found somewhere that this can be a performance hit, because Xen needs to figure out how to deal with this and switch contexts. I also read that it could cause some instability (!), so pinning the physical CPUs so they don't move around seemed to solve this.

The pinning does not stick across reboots, so it has to be done again if the system is rebooted, and it isn’t really possible to set this in a startup script, at least I don’t think so.

So how do you do this? If you look at 'xm vcpu-list' (which annoyingly isn't listed in 'xm help'), you will see the CPU column populated with a random CPU, depending on scheduling, and the CPU Affinity column saying 'any cpu' for every row. This means that any physical CPU could service any VCPU, and will, depending on the scheduling. Once you pin things, the individual domUs have specific CPU affinities, so the CPUs don't 'travel' between them, and magically the crash stops.

So an example:

root@shoveler:~# xm vcpu-list
Name	       ID  VCPU   CPU State    Time(s) CPU Affinity
Domain-0        0     0     1   -b-  283688.8 	any cpu
Domain-0        0     1     1   ---   39666.3 	any cpu
Domain-0        0     2     1   r--   49224.4 	any cpu
Domain-0        0     3     1   -b-   75591.1 	any cpu
kite            1     0     3   -b-   71411.8 	any cpu
murrelet        2     0     0   -b-  472222.2 	any cpu
test            3     0     0   r--  342182.3 	any cpu

So we want to fix that final column using ‘xm vcpu-pin’:

root@shoveler:~# xm vcpu-pin 0 0 0
root@shoveler:~# xm vcpu-pin 0 1 0
root@shoveler:~# xm vcpu-pin 0 2 0
root@shoveler:~# xm vcpu-pin 0 3 0
root@shoveler:~# xm vcpu-pin 1 0 1
root@shoveler:~# xm vcpu-pin 2 0 2
root@shoveler:~# xm vcpu-pin 3 0 3
root@shoveler:~# xm vcpu-list
Name	       ID  VCPU   CPU State    Time(s) CPU Affinity
Domain-0        0     0     1   -b-  283688.8 	0
Domain-0        0     1     1   ---   39666.3 	0
Domain-0        0     2     1   r--   49224.4 	0
Domain-0        0     3     1   -b-   75591.1 	0
kite            1     0     3   -b-   71411.8 	1
murrelet        2     0     0   -b-  472222.2 	2
test            3     0     0   r--  342182.3 	3

After these are set, no more crashes. I believe these can also be set in the domU's config file in /etc/xen/ by specifying a cpu line, like this:

cpu = 1
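
Since the pinning has to be redone after every reboot, a small script saves retyping the commands. A minimal sketch matching the example output above (the domain IDs and CPU numbers are from my setup; yours will differ):

#!/bin/sh
# pin all four dom0 VCPUs to physical CPU 0
for v in 0 1 2 3; do
    xm vcpu-pin 0 $v 0
done
# pin each domU's single VCPU to its own physical CPU
xm vcpu-pin 1 0 1   # kite
xm vcpu-pin 2 0 2   # murrelet
xm vcpu-pin 3 0 3   # test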

Some good resources

  1. the debian xen wiki page
  2. the xen support pages (search for your problem)
  3. the xen FAQ
  4. mnaumann’s weblog