Brandon Konkle

Principal Engineer, type system nerd, Rust enthusiast, supporter of social justice, loving husband & father, avid comic & manga reader, and student of Japanese.

I'm a Software Architect with more than 15 years of experience creating high-performance server and front-end applications for web and mobile platforms, and today I lead a team at Formidable Labs.

LVM-Based Virtualization with KVM and Jaunty

A month ago, I had three tower PCs running in my home office 24x7 - a desktop PC, a web server, and a home media server. Routinely high electric bills prompted me to make the decision to combine the two servers into one, but I wanted to do everything possible to isolate the media server activity from the web server. I decided on virtualization to accomplish this, allowing the web and media servers to run as separate virtual machines on the same hardware.

I liked the idea of virtualization because it allowed me to install the bulky multimedia conversion packages that I needed for my media server while keeping my web server streamlined and focused. Additionally, if I switch hardware later on, I can simply set up the core virtualization system and then copy my VMs over, without having to reinstall and reconfigure my entire stack for both servers. After deciding on virtualization, I went out to my local Fry's and picked up a low-power AMD 64-bit CPU to save further energy. One important point before you begin: your CPU must support hardware virtualization in order to use KVM.

I reviewed several sources of information while completing this project, and I found this guide at the Ubuntu Community Documentation site, this guide on the HowtoForge site, and this forum thread very helpful. Below is a quick rundown of the method I used after extensive trial & error to get this working on my setup. I went through a lot of grief because of my decision to pursue LVM-based virtual machines instead of simple disk images, but I think the performance gain and optimization of disk I/O was worth it. Hopefully I'll be able to save you some of that grief.

I started out with a fresh Jaunty install with the openssh-server package installed, and I used an SSH console to log into my machine remotely. I used the command below to make sure that my CPU supports hardware virtualization, checking to make sure I got some output:

egrep '(vmx|svm)' --color=always /proc/cpuinfo

VMX is Intel-based hardware virtualization, and SVM is AMD's technology. If nothing is shown when you run the command on your machine, then you are out of luck.
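
For a quick pass/fail check, you can also have egrep count the matching lines instead; anything greater than zero means the flag is present:

egrep -c '(vmx|svm)' /proc/cpuinfo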

Next, I installed the ubuntu-virt-server package.

sudo apt-get install ubuntu-virt-server

Then, I added myself to the libvirtd group.

sudo adduser myusername libvirtd
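
Group changes only take effect on a new login, so log out and back in (or open a fresh SSH session) first. You can then verify that libvirt accepts your user without sudo by listing the (currently empty) set of running domains:

virsh --connect qemu:///system list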

Next, I needed to set up network bridging. I installed the bridge-utils package.

sudo apt-get install bridge-utils

Then, I reconfigured my /etc/network/interfaces file.

sudo nano /etc/network/interfaces

I manage my static IP address through the DHCP server on my router, so I kept DHCP as the method of IP discovery for my network. My interfaces file looked like this when I was done:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off

Since I was connecting to my machine remotely via SSH, I instructed it to reboot to make sure the bridge configuration came up correctly on its own. A full reboot shouldn't be necessary; you can instead use sudo invoke-rc.d networking stop followed by sudo invoke-rc.d networking start (preferred over a plain networking restart if you are switching from DHCP to static), but I decided to reboot anyway to confirm everything survived a boot cycle.

sudo reboot
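
Once the machine came back up, two quick checks confirmed that the bridge existed and had picked up my IP (brctl comes from the bridge-utils package installed earlier):

brctl show
ifconfig br0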

The next step I took was to create a minimal image-based VM that I later converted into an LVM-based VM. Every attempt I made at skipping the image-based VM and going with LVM from the start failed. I had a large chunk of space reserved for LVM, but I hadn't yet created the logical volumes that I was planning to use for the VMs. I followed Falko's recommendation on HowtoForge and created a set of directories in my home folder that I copied the vmbuilder templates to.

mkdir -p ~/vm1/templates/libvirt
cp /etc/vmbuilder/libvirt/* ~/vm1/templates/libvirt/

Then I altered the template to enable bridged networking.

nano ~/vm1/templates/libvirt/libvirtxml.tmpl

I changed <interface type='network'> to <interface type='bridge'> and <source network='default'/> to <source bridge='br0'/>.
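
For reference, after those edits the interface section of my template looked roughly like this (your template may also carry a MAC address line, which can stay as-is):

<interface type='bridge'>
  <source bridge='br0'/>
</interface>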

I wanted an SSH server on my VMs, but I found that if I used the --addpkg argument on the vmbuilder command, my VMs wouldn't get unique SSH keys. Instead, I used a script that runs at first boot.

nano ~/vm1/boot.sh

#!/bin/sh
apt-get update
apt-get install -qqy --force-yes openssh-server

The first time I attempted this, I created a VM with multiple partitions matching the layout and sizes that I intended for the machine once it was up and running. I had a root partition, swap space, and a 500GB partition that I planned to use for my web applications, Mercurial repositories, etc. I created the VM with these specifications, and when I went to convert it from image-based to LVM, I found that the process took way too long. 12 hours into the conversion, I finally gave up and decided to go with a minimal VM at first which I would expand with fdisk after it was up and running.

I kept my root at 15 GB, and my swap space at 1.5 times the amount of physical memory I planned to allocate to the VM. I used the following command to create my VM (the --rootsize and --swapsize values are in MB), which you will want to tweak for your setup:

sudo vmbuilder kvm ubuntu --suite=jaunty --flavour=virtual --arch=amd64 -o --libvirt=qemu:///system --tmpfs=- --templates=templates --user=myusername --name="My User Name" --pass=mypassword --addpkg=acpid --firstboot=boot.sh --mem=512 --hostname=vm1 --rootsize=15000 --swapsize=1250

The next step was to convert the disk image to a raw image, which can then be copied onto the logical volume with the dd command.

cd ~/vm1/ubuntu-kvm/

qemu-img convert disk0.qcow2 -O raw disk0.raw
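
Before copying it anywhere, qemu-img info is a quick sanity check on the raw image's size:

qemu-img info disk0.raw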

Then, I was ready to create the logical volume that would house the VM, using the full 500GB that I was planning to allocate to it. I already had a volume group, called vg01 in this example; the new logical volume is called vm1.

sudo lvcreate -n vm1 -L500G vg01
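
A quick check confirms that the volume exists at the expected size:

sudo lvdisplay /dev/vg01/vm1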

Then, I used dd to copy the raw image to the new logical volume.

sudo dd if=disk0.raw of=/dev/vg01/vm1 bs=1M
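
dd is silent while it works, and writing into a 500GB volume takes a while. From a second terminal, GNU dd will print its transfer statistics if you send it the USR1 signal:

sudo kill -USR1 $(pgrep '^dd$')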

Once that was complete, I needed to point libvirt at the VM's new disk location.

cd /etc/libvirt/qemu

sudo nano vm1.xml

I changed the disk tag to look like this:

<disk type='block' device='disk'>
  <source dev='/dev/vg01/vm1'/>
  <target dev='hda' bus='ide'/>
</disk>

Since we've changed the configuration XML, we now need to instruct libvirt to reload it. The define command below is run from within the virsh shell:

sudo virsh --connect qemu:///system

define /etc/libvirt/qemu/vm1.xml
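
While still at the virsh prompt, list --all should show vm1 as shut off; start boots it from the logical volume, and quit drops you back to your shell:

list --all
start vm1
quit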

I moved my .qcow2 and .raw images out of the ubuntu-kvm folder to make sure that they weren't being used by KVM.

Next, I needed to set up a management interface. I'm running Ubuntu Jaunty on my desktop, so I simply installed the virt-manager package. Once installed, you can access it under Applications-->System Tools-->Virtual Machine Manager. Upon launch, click File-->Add Connection. Choose QEMU as the hypervisor, and Remote Tunnel over SSH as the Connection. Enter the hostname of your physical host machine, and connect. Your new virtual machine should show up underneath your physical machine. You can double-click it to connect using QEMU and see the console output.

If everything has gone well, your machines should show up as running in virt-manager. If the boot.sh script ran correctly, you should be able to SSH into them using the IP assigned by your DHCP server or the static IP you've established. As I mentioned previously, I manage static IPs through my DHCP server, so my router assigned the servers the predetermined IPs I set for them and automatically registered their hostnames with the router's DNS gateway. Because of this, I was able to simply run ssh vm1 to get to my servers. You will need to adjust based on your own setup.

Once you are able to connect to your virtual server via SSH, you can take care of the final step of provisioning the rest of your storage. I used fdisk.

sudo fdisk /dev/sda
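
Note that fdisk only writes the partition table; the new partition still needs a filesystem before it can be mounted. Assuming the new partition came up as /dev/sda3, matching the fstab line below:

sudo mkfs.ext4 /dev/sda3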

After creating the desired partition, I set up a mount point and added a line to fstab.

sudo mkdir /mystorage

sudo nano /etc/fstab

/dev/sda3 /mystorage ext4 defaults 0 0
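
With that line in place, mount -a picks up the new entry without a reboot, and df confirms the space is available:

sudo mount -a
df -h /mystorage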

And that was it; I was ready to set up my servers. I finished this project a couple of weeks ago, so hopefully I've remembered everything. If I'm missing something, please let me know in the comments.
