
Impressions of the Virtualization Track at LinuxTag 2007

This year's LinuxTag was held in the ICC on the Berlin fairground (Messe Berlin). Besides several exhibition halls with booths, there was also a discussion forum, and in at least five halls talks were held on different topics. I attended the virtualization track, held in hall “Berlin”, with a total of 7 talks. All of them had Linux and virtualization in common, but each took a different look at this technology (if I may call virtualization a technology) and had a different focus. That is no surprise, given the various virtualization technologies around, each one fitting someone's needs.

The talks I attended are as follows:

  • From one to many – Virtualization with free software — Sascha Wilde (Intevation GmbH)
  • Free installation, management and monitoring tools for Xen infrastructures — Henning Sprang (Solpion GmbH)
  • Managing any flavor of virtualization: openQRM in the enterprise data center — Matthias Rechenburg (freelancer)
  • Xen: The Next Generation — Steven Hand (University of Cambridge)
  • Linux Mainframe – KVM and the road of Virtualization — Joerg Roedel and Andre Przywara (AMD Operating System Research Center)
  • Open-Source full x86 virtualization: InnoTek VirtualBox — Ulrich Möller (InnoTek Systemberatung GmbH)
  • OS Circular: A Framework of Internet boot with virtual machine — Kuniyasu Suzaki (National Institute of Advanced Industrial Science and Technology)

Due to my travel from Walldorf to Berlin on Wednesday morning, I missed the first 15 minutes of the first talk. I'm not sad about missing it, because the first talk was, in my opinion, mainly meant to warm up the audience. Besides Xen, QEMU and VirtualBox were introduced; that is, some live examples were shown, the differences were explained and the results of a benchmark were presented. This benchmark was meant to be an I/O-intensive one, but I really do not agree with that: it consisted of extracting the vanilla kernel sources and building them, and compared to real I/O setups, extracting a tar is nothing. But anyway, by now the audience was somewhat awake and ready for the next talk.

The second talk was held by Henning Sprang about tools which support the daily work with Xen. As Mr. Sprang uses Debian Linux, the focus was more on the Debian tools, which are unfortunately not the same as the Red Hat or Novell/SUSE tools. Nevertheless, virt-manager and YaST were listed as installation tools for Red Hat and Novell/SUSE; on Debian one would use xen-tools to install new virtual machines. The management tools mentioned were virt-manager, xen-shell, xenman, Enomalism, Argo/Argos, DTC-Xen and openQRM, with openQRM being the most promising. Since openQRM was the topic of the next talk, I'll skip explaining it here. Monitoring tools for Xen include xentop, xm, usher and xmlpulse. Furthermore, the three existing Nagios plugins, among them check_xen, were mentioned. As Mr. Sprang's intention was not to indoctrinate us with one solution for every task but to leave the choice to us, all I can say is that I will definitely try some of these tools.
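To give an impression of the xen-tools workflow mentioned above: creating a new Debian guest boils down to a single command. This is only a sketch; the hostname, sizes and distribution below are placeholder values of mine, not ones from the talk.

```shell
# Create a new Xen guest with Debian's xen-tools
# (hostname, sizes and distribution are illustrative placeholders).
xen-create-image --hostname=vm01 --dist=etch \
    --size=4Gb --memory=256Mb --swap=512Mb --dhcp

# Afterwards, start the guest from the generated configuration file:
xm create /etc/xen/vm01.cfg
```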

The talk after Mr. Sprang's was held by Mr. Matthias Rechenburg. The topic was “Managing any flavor of virtualization: openQRM in the enterprise data center”. openQRM, originally developed by Qlusters, became open source in 2006. Its goal is to unify virtual system management: openQRM can manage VMware, QEMU, Xen and Linux-VServer from one unified graphical user interface. But openQRM is not only a GUI. Given an existing infrastructure with Linux-VServer, VMware, Xen or QEMU, openQRM is able to create images from the existing virtual machines; these images can later be provided as netboot images for new virtual machines. openQRM runs on FreeBSD, Windows, Solaris (x86_64 and SPARC) and of course Linux, and there are also Nagios plugins available. Just have a look at the openQRM homepage for more details.

The next talk was meant to be the official keynote of the LinuxTag. Steven Hand from the University of Cambridge had the chance to give a 75-minute talk about “Xen: The Next Generation”. Besides the introduction to the Xen methodology and the difference between Xen and XenEnterprise, the most interesting part was the overview and detailed description of the new features of the upcoming 3.1 version of Xen. One feature is that it is now possible to run 32-bit guest systems on a 64-bit host system. Another key feature is the release of the XenAPI 1.0, which will be the main and stable API for Xen. Other new features only affect the work with HV machines: with Xen 3.1, dynamic memory control is introduced for HV guest systems, as well as preliminary save, restore and migrate support for such VMs.
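For readers who only know these features by name: save, restore and migrate are driven through the familiar xm tool. The domain and host names below are my own illustrative placeholders.

```shell
# Suspend a running guest to a state file, then resume it later
# (guest and file names are illustrative).
xm save myguest /var/lib/xen/save/myguest.chk
xm restore /var/lib/xen/save/myguest.chk

# Live-migrate a running guest to another Xen host.
xm migrate --live myguest otherhost.example.com
```

What is new in Xen 3.1 is that these operations start to work for HV (fully virtualized) guests too, not only for PV ones.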

Steven Hand also gave an outlook on new performance-related work in the HVM area. Having HVM virtualized machines with PV-enabled device drivers (e.g. network or storage) will be very effective: the HVM kernel still doesn't know that it runs in a virtual machine, but the device driver does. He showed a performance comparison on a gigabit Ethernet transmission benchmark between plain HVM, HVM with PV-enabled drivers, and pure PV. While plain HVM only reached about 8% of the theoretical throughput of 1 GBit/sec, HVM with PV-enabled drivers reached around 80% and PV around 95%. I'm really looking forward to seeing some benchmarks using PV-enabled storage drivers rather than network drivers.

The next talk was held by Joerg Roedel and Andre Przywara, both from the AMD Operating System Research Center in Dresden, Saxony, Germany. Their talk covered the current status of KVM, the Kernel-based Virtual Machine. Before they started, they introduced a virtualization terminology which in my opinion is very good. Here it is:

  • Host: physical machine
  • Guest: emulated or virtual machine running on a host
  • Hypervisor: system software that manages virtualization
  • PV (para virtualization): modified guest
  • HV (hardware virtualized): unmodified guest
  • dom0: (Xen term) supporting system for hypervisor, can only be PV
  • domU: (Xen term) guest system, can be PV or HV

After explaining the key features of VMware and Xen, they moved on and introduced the concept of KVM and its differences from the other two virtualization technologies. KVM is a hypervisor inside the Linux kernel, implemented as a set of kernel modules, and not, as in the Xen approach, beneath the kernel. KVM cannot emulate (like VMware) or paravirtualize (like Xen) virtual machines; it only provides the HV option. Thus, for example, the I/O and network performance is not good, because KVM uses QEMU for accessing devices. A further restriction is that a virtual machine can only have 2 GB of memory, because of the heavy use of signed int in the QEMU and KVM code. Also, KVM doesn't yet provide SMP functionality inside the virtual machines.
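The kernel-module design also shows in how KVM is used in practice. As a rough sketch, assuming an AMD host and an existing disk image (the binary name varies by distribution, e.g. kvm or qemu-kvm):

```shell
# Load the KVM modules: kvm plus the CPU-specific part
# (kvm-amd on AMD-V hardware, kvm-intel on Intel VT-x).
modprobe kvm
modprobe kvm-amd

# KVM reuses QEMU as its user-space device model; with /dev/kvm
# available, the guest runs hardware-virtualized instead of emulated.
# Device I/O still goes through QEMU, hence the performance caveat above.
qemu-kvm -m 512 -hda guest.img
```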

After their overview of current problems, they gave an overview of which feature requests are already being addressed and will be implemented in future versions: the use of the softMMU to enable shadow page tables, the availability of nested page tables, better PCI access routines and enhanced interrupt sharing.

After the KVM talk closed with this outlook on future enhancements, the following talk, held by Ulrich Möller from InnoTek Systemberatung GmbH, focused on VirtualBox, another virtualization technology. VirtualBox comes in two versions, a commercial one and an open source one; further information is listed on the homepage. As the new version 1.4 is coming out soon, with many new features like support for 64-bit Linux, I can't say much about this version yet. Let's wait for its release and check what it can and cannot do.

Unfortunately, my train back home was scheduled for 6 pm, so I could not attend the whole talk of Kuniyasu Suzaki from the National Institute of Advanced Industrial Science and Technology. The topic was “OS Circular: A Framework of Internet boot with virtual machine”, which sounds interesting. Mr. Suzaki also maintains the Xenoppix project (Xen + Knoppix) himself. In essence, OS Circular gives one the ability to boot virtual machines over the HTTP protocol. For more information, please have a look at the link at the top of this page; the complete virtualization track agenda, including some of the presenters' slides, can be found there.
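To make the "boot over HTTP" idea concrete, here is a deliberately crude approximation of mine, not how OS Circular actually works: OS Circular streams the guest filesystem on demand rather than downloading the whole image first, and the URL below is purely hypothetical.

```shell
# Naive sketch of Internet boot: fetch a (hypothetical) disk image
# from a web server, then boot it in a virtual machine.
# OS Circular itself fetches filesystem blocks lazily over HTTP instead.
wget http://images.example.com/guest.img
qemu -m 256 -hda guest.img
```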

Overall, it was very interesting to attend that many talks on one day. It was packed with lots of technical facts, open source ideas and some marketing stuff as well. I really enjoyed the day and I look forward to the next LinuxTag, where maybe the virtualization technology will be more advanced and the talks more suited to enterprise business needs.


  1. Davide Cavallari
    You say that VMWare can “emulate” and Xen “virtualize”.

    Probably you’re referring to the “para-virtualization” capability of Xen, opposed to the “hardware virtualization” provided by KVM and Xen itself on CPUs which support HVM (INTEL’s Vanderpool or AMD’s Pacifica-capable CPUs).

    I wonder what the term “emulate” means, as far as VMWare is concerned.

    Thanks, Davide

    1. Davide Cavallari
      Specifically, you say “KVM cannot emulate (like VMWare) virtual machines”.

      The Wikipedia entry for “virtualization”:

      Emulation or simulation
      the virtual machine simulates the complete hardware, allowing an unmodified “guest” OS for a completely different CPU to be run

      I’m not aware of such a VMWare capability. The guest OS should be the same as the host OS, i.e. x86 or AMD64. Is this right?

      Thanks, Davide

  2. Davide Cavallari
    It’d be interesting if you shared your experience in the following thread:
    Virtual Windows XP on a linux desktop

    In the mentioned scenario, the only choice is to run an unmodified OS on the guest system. This requires the HV option, therefore I suppose the resulting performance would be similar both with Xen and KVM.

    On the other hand, I wonder whether other solutions (such as VirtualBox or VMWare Server) would be better in that scenario, where I/O and network performance is quite critical.

    Thanks, Davide

