Introduction to the Xen Virtual Machine

Everyone's talking about Xen, but the code is complex. Here's a starting point.
The Xend Daemon

First, what is the Xend daemon? It is the Xen controller daemon: it handles creating new domains, destroying existing domains, migration and many other domain management tasks. A large part of its activity is based on running an HTTP server. The default port of the HTTP socket is 8000, which can be configured. Domains are controlled by sending HTTP requests to this server: requests for domain creation, domain shutdown, domain save and restore, live migration and more. Most of the Xend code is written in Python, but it also calls C methods from within the Python scripts.
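
To make the control path concrete, here is a minimal sketch, in C, of a client talking to Xend's HTTP interface over a plain socket. The port number (8000) comes from the text above; the resource path /xend/domain is an assumption for illustration only, and real clients go through the Python tools instead.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    char buf[4096];
    const char *req = "GET /xend/domain HTTP/1.0\r\n\r\n"; /* hypothetical path */
    ssize_t n;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8000);                  /* Xend's default port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect to xend");
        return 1;
    }

    write(fd, req, strlen(req));                  /* send the request */
    while ((n = read(fd, buf, sizeof(buf))) > 0)  /* print the reply */
        fwrite(buf, 1, n, stdout);
    close(fd);
    return 0;
}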

We start the Xend daemon by running xend start from the command line after booting into Xen. What exactly does this command involve? First, Xend requires Python 2.3, to support its logging functions.

The work of the Xend daemon is based on interaction with an XCS server, the Xen Control Switch. So, when we start the Xend daemon, we check to see whether XCS is up and running; if it is not, we try to start it. This step is discussed more fully later in this article.
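
The check itself is done from Xend's Python startup code; the idea, sketched here in C for uniformity with the rest of the examples, is simply to probe the XCS control port and spawn the daemon if the connection is refused. The port number and the xcs invocation below are placeholders, not the real configuration.

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define XCS_CTRL_PORT 1632  /* placeholder; the real port is configured elsewhere */

static int xcs_is_up(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int up;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(XCS_CTRL_PORT);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    up = fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (fd >= 0)
        close(fd);
    return up;
}

int main(void)
{
    if (!xcs_is_up())
        return system("xcs");  /* hypothetical invocation of the XCS binary */
    return 0;
}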

The SrvDaemon is, in fact, the Xend main program; starting the Xend daemon creates an instance of the SrvDaemon class (tools/python/xen/xend/server/SrvDaemon.py). Two log files are created here, /var/log/xend.log and /var/log/xend-debug.log.

We next create a Channel Factory in the createFactories() method. The Channel Factory has a notifier object embedded inside it. Much of the work of the Xend daemon is based on messages received by this notifier. The factory creates a thread that reads the notifier in an endless loop. The notifier delegates the read request to the XCS server; see xu_notifier_read() in xen/lowlevel/xu.c. This method sends the read request to the XCS server by calling xcs_data_read().
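
Here is the shape of that endless loop, sketched as a self-contained C program. In the real code the loop runs in a Python thread and the read is delegated through xu_notifier_read() to xcs_data_read(); the message layout, the stub body of xcs_data_read() and the dispatch() handler below are stand-ins for illustration.

#include <pthread.h>
#include <stdio.h>

struct xcs_msg { int type; char payload[60]; };  /* simplified message */

/* Stub standing in for the call that reads the XCS data channel. */
static int xcs_data_read(struct xcs_msg *msg)
{
    static int served;
    if (served++)
        return -1;                 /* pretend XCS closed after one message */
    msg->type = 1;
    snprintf(msg->payload, sizeof(msg->payload), "virq");
    return 0;
}

/* Stub standing in for handing the message to Xend proper. */
static void dispatch(struct xcs_msg *msg)
{
    printf("got message type %d: %s\n", msg->type, msg->payload);
}

static void *notifier_loop(void *unused)
{
    struct xcs_msg msg;
    (void)unused;

    for (;;) {                     /* endless read loop */
        if (xcs_data_read(&msg) < 0)
            break;                 /* XCS went away */
        dispatch(&msg);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, notifier_loop, NULL);
    pthread_join(tid, NULL);
    return 0;
}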

Creating a Domain

The creation of a domain is accomplished by using a hypercall (DOM0_CREATEDOMAIN). What is a hypercall? In the Linux kernel, there is a system call with which user space can call a method in the kernel; this is done by an interrupt (Int 0x80). In Xen, the analogous call is a hypervisor call, through which domain 0 calls a method in the hypervisor. This, too, is accomplished by an interrupt (Int 0x82). The hypervisor accesses each domain by its virtual CPU, struct vcpu in include/xen/sched.h.
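
For the curious, this is roughly how the hypercall macros in the sparse tree issue a hypercall on x86: the hypercall number goes in %eax, the first argument in %ebx, and Int 0x82 traps into the hypervisor, just as Int 0x80 enters the Linux kernel for a system call. DOM0_CREATEDOMAIN itself is a command carried inside the dom0_op argument structure. The macro below is an approximation for illustration, not a verbatim copy of the Xen headers.

struct dom0_op;  /* the real definition lives in the Xen public headers */

#define TRAP_INSTR           "int $0x82"
#define __HYPERVISOR_dom0_op 7  /* index into the hypercall table (approximate) */

static inline int HYPERVISOR_dom0_op(struct dom0_op *op)
{
    int ret;

    __asm__ __volatile__ (
        TRAP_INSTR
        : "=a" (ret)                   /* return value comes back in %eax */
        : "0" (__HYPERVISOR_dom0_op),  /* hypercall number goes in %eax */
          "b" (op)                     /* first argument goes in %ebx */
        : "memory" );
    return ret;
}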

The XendDomain class and the XendDomainInfo class play a significant part in creating and destroying domains. The domain_create() method of the XendDomain class is called when we create a new domain; it starts the process of creating a domain.

The XendDomainInfo class and its methods are responsible for the actual construction of a domain. The construction process includes setting up the devices in the new domain. This involves a lot of messaging between the front-end device drivers in the new domain and the back-end device drivers in the back-end domain. We discuss the back-end and front-end device drivers later.

The XCS Server

The XCS server opens two TCP sockets, the control connection and the data connection. The difference between them is that the control connection is synchronous, while the data connection is asynchronous. The notifier object mentioned earlier, for example, is a client of the XCS server.

A connection to the XCS server is represented by an object of type connection_t. After a connection is bound, it is added to a list of connections, connection_list, which is iterated every five seconds to see whether new control or data messages have arrived. Messages, which can be either control or data messages, are handled by handle_control_message() or handle_data_message(), respectively.
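
Condensed into a sketch, the XCS dispatch loop looks something like the following. Only connection_t, connection_list, the five-second poll and the two handler names come from the description above; the struct layout and the helper bodies are guesses for illustration.

#include <stdio.h>
#include <unistd.h>

typedef struct connection {
    int ctrl_fd, data_fd;              /* the two TCP sockets */
    struct connection *next;
} connection_t;

static connection_t *connection_list; /* all bound connections */

/* Trivial stand-ins for the real socket readers and handlers. */
static int pending_message(int fd, char *msg)
{
    (void)fd; (void)msg;
    return 0;                          /* stub: nothing pending */
}

static void handle_control_message(connection_t *c, char *msg)
{
    (void)c; printf("control message: %s\n", msg);
}

static void handle_data_message(connection_t *c, char *msg)
{
    (void)c; printf("data message: %s\n", msg);
}

int main(void)
{
    char msg[256];

    for (;;) {
        for (connection_t *c = connection_list; c; c = c->next) {
            if (pending_message(c->ctrl_fd, msg))
                handle_control_message(c, msg);
            if (pending_message(c->data_fd, msg))
                handle_data_message(c, msg);
        }
        sleep(5);                      /* the five-second poll interval */
    }
    return 0;
}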

Creating Virtual Devices When Creating a Domain

The create() method in XendDomainInfo starts a chain of actions to create a domain. First, the virtual devices of the domain are created. The create() method calls create_blkif() to create a block device interface (blkif); this is a must even if the VM doesn't use a disk. The other virtual devices are created by create_configured_devices(), which eventually calls the createDevice() method of the DevController class (see controller.py). This method calls the newDevice() method of the corresponding class. All the device classes inherit from Dev, an abstract class representing a device attached to a device controller. Its attach() abstract (empty) method is implemented in each subclass of the Dev class; this method attaches the device to its front end and back end. Figure 2 shows the device hierarchy, and Figure 3 shows the device controller hierarchy; a sketch of this pattern follows the figures.

Figure 2. The Device Hierarchy

Figure 3. The Device Controller Hierarchy
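
The real hierarchy is Python class inheritance in controller.py, but the pattern is easy to restate in C for readers following the hypervisor side: an abstract device is a struct holding a table of function pointers, and each device type supplies its own attach(). Everything below is a hypothetical illustration of the pattern, not code from the Xen tree.

#include <stdio.h>

struct dev;

struct dev_ops {
    /* attach() wires the device to its front end and back end. */
    void (*attach)(struct dev *d);
};

struct dev {
    const char *name;
    const struct dev_ops *ops;  /* filled in by each "subclass" */
};

static void netif_attach(struct dev *d)
{
    printf("%s: connecting netfront to netback\n", d->name);
}

static const struct dev_ops netif_ops = { netif_attach };

int main(void)
{
    struct dev vif0 = { "vif0", &netif_ops };
    vif0.ops->attach(&vif0);    /* dispatch, like the Python virtual call */
    return 0;
}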

Domain 0 runs the back-end drivers, and the newly created domain runs the front-end drivers. Many messages pass between the back-end and front-end drivers. The front-end driver is a virtual driver in the sense that it does not use specific hardware details; its code resides in drivers/xen in the sparse tree.

Event channels and shared-memory rings are the means of communication among domains. For example, in the case of the netfront device (netfront.c), which is the network card front-end interface, np->tx and np->rx are the shared memory pages, one for the receive buffer and one for the transmit buffer. In send_interface_connect(), we tell the back end (netback) to bring up the interface. The connect message travels through the event channel to the netif_connect() method of the back end, in interface.c. The netif_connect() method calls get_vm_area(2*PAGE_SIZE, VM_IOREMAP). The get_vm_area() method searches the kernel virtual mapping area for a free area whose size equals two pages.
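
A condensed, kernel-style sketch of the back-end side of this connect follows: reserve a two-page window of kernel virtual address space with get_vm_area() and then map the front end's shared pages into it. The helper name netif_map_shared_rings() is made up for illustration, and the actual remapping step is elided.

#include <linux/errno.h>
#include <linux/vmalloc.h>
#include <asm/page.h>

static int netif_map_shared_rings(void)  /* hypothetical helper name */
{
    struct vm_struct *vma;

    /* One page for the tx ring, one for the rx ring. */
    vma = get_vm_area(2 * PAGE_SIZE, VM_IOREMAP);
    if (vma == NULL)
        return -ENOMEM;

    /*
     * The real netif_connect() now remaps the front end's pages into
     * [vma->addr, vma->addr + 2*PAGE_SIZE); that step is elided here.
     */
    return 0;
}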

In the blkif case, which is the block device front-end interface, blkif_connect() also calls get_vm_area(); in this case, however, it uses only one page of memory.

The interrupts associated with virtual devices are virtual interrupts. When you run cat /proc/interrupts from domainU, look at the interrupts with numbers higher than 256; they are labeled "Dynamic-irq".

How are IRQs redirected to the guest OS? The do_IRQ() method was changed to support IRQs for the guest OS. This method calls __do_IRQ_guest() if the IRQ is destined for the guest OS (see xen/arch/x86/irq.c). __do_IRQ_guest() uses the event channel mechanism to send the interrupt to the guest OS, via the send_guest_pirq() method in event_channel.c.
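
Putting the pieces together, the guest-IRQ path can be sketched as a standalone C program. Only the names do_IRQ(), __do_IRQ_guest() and send_guest_pirq() and the overall flow come from the text; the types, the IRQ_GUEST flag value and the stub bodies are simplified stand-ins.

#include <stdio.h>

#define IRQ_GUEST 0x10  /* "bound to a guest" status flag (value assumed) */

struct domain { int domid; };

static struct irq_desc {
    unsigned int   status;
    struct domain *guest;              /* domain bound to this IRQ */
} irq_desc[16];

/* Stub: the real one marks an event-channel port pending for the domain. */
static void send_guest_pirq(struct domain *d, int pirq)
{
    printf("pirq %d -> event channel of domain %d\n", pirq, d->domid);
}

static void __do_IRQ_guest(int irq)
{
    send_guest_pirq(irq_desc[irq].guest, irq);
}

static void do_IRQ(int irq)
{
    if (irq_desc[irq].status & IRQ_GUEST)
        __do_IRQ_guest(irq);           /* route to the guest OS */
    else
        printf("irq %d handled by Xen itself\n", irq);
}

int main(void)
{
    static struct domain dom1 = { 1 };

    irq_desc[9].status = IRQ_GUEST;    /* pretend irq 9 belongs to a guest */
    irq_desc[9].guest  = &dom1;
    do_IRQ(9);                         /* goes to the guest */
    do_IRQ(3);                         /* stays in Xen */
    return 0;
}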

______________________

Comments


Help me please

Anonymous

This is my coursework in school:
1. An applied example of Xen and VMs (virtual machines)
2. A description of the relation between VMs and OSes
3. An applied example of VMs in use today
Help me please.

Some performance test results

ericzqma

I installed Xen on two of our servers. The kernel and Xen are modern versions (the platform information can be found here). I have done some tests on this platform; the result is that the CPU performance is nearly 100%, while the memory performance is only 90% compared to the physical machine. Details can also be found here. I am satisfied with the CPU performance, but maybe the 10% memory overhead is a bit large. I am wondering whether there is some mistake in my configuration or how to improve it. -- Eric

Good

Laks

Good article - it shows a few concepts behind Xen, and it's useful for beginners.

Is it possible to do a memory copy operation between two VMs?

Anonymous

Is it possible to do a memory copy operation between two guest VMs directly through Xen, without involving dom 0?

Inter-Domain Comms

Mr. B

Hello,

It is a little confusing how you have described the interaction between XenStore and a domain. How exactly does a domain interact with XenStore, i.e., TCP ports, sockets, etc.? Since XenStore resides in ring 3, how does it access the hypervisor itself? Thanks.

Mr. B

xen against qemu/bochs

Anonymous

With Xen on x86(_32) running the guest OS kernel in ring 1 and guest OS applications in ring 3, a carefully exploited guest OS is a wide-open door to hijacking host OS root applications in ring 3 and, in this way, compromising the host OS.

That's something I guess can't happen with qemu/bochs etc. In other words: you trade that for speed.

And at first guess, the enhanced CPU architectures will just have tags on descriptors and more complicated descriptor-access rules, to enable more separate page tables to be loaded simultaneously and switched/selected on demand and by privilege. But then how can they provide applications/OSes existing in different page tables with a similar amount of CPU time to run? Maybe someone can summarize the tech a bit and publish it?

carefully exploited guest OS

Anonymous

Xen does validation of memory accesses

does xen overhead include OS overhead?

undefined

One clarification I need: does the 3% overhead of Xen include the overhead of running multiple identical guest kernels? Yes, Xen adds 3% overhead, but is there also some duplication when running three Linux kernels, whether in memory or in processing?

I recently investigated virtualization for the purpose of consolidating, yet keeping partitioned, a Linux server and desktop. As there is very little difference between my current Linux server and desktop kernels, I would prefer not to duplicate the Linux kernel but merely have different userlands. I am currently testing Linux VServer, as it allows me to run a single Linux kernel but maintain multiple userland "instances", each "instance" with its own IP address and other features.

Granted, VServer, chroot, etc. do not help when a user wants to run different operating systems (Linux and Windows), and if full separation between userland images, even down to the kernel level (kernel-level exploits, user-visible features like nfsd, etc.), is desired, then Xen is the proper tool for that job. Heck, give the Xen LiveCD a test drive and marvel at Xen's accomplishment.

Just wanted to share my holiday weekend's research to help save someone else some time.

We tested it recently

Anonymous

We tested it recently. Yup, it involves 3% overhead on simple operations, but the overhead is more than 20-30% on disk I/O, network, etc. And sure, the memory pressure/requirements you mentioned are rather big.

I would recommend you take a look at the OpenVZ project as well. It is more mature than VServer. We successfully run 30-50 VPSs on 1GB of RAM with it.

disk/network io

Anonymous

Why not use separate drives for each server slice instead of a filesystem on a file? Perhaps separate network cards also?

This might mitigate the slowdown, though perhaps at the cost of saturating the buses.

Anyone doing that?

Cheers,
-b

disk i/o

Luke Crawford

Things run much faster if you give each domU its own partition. LVM helps a lot here, both to run many small domains on one disk and to keep track of who owns which partition.

Anyone care to write a proof-of-concept exploit?

Anonymous

> (kernel-level exploits,

I guess this may still be an issue with Xen compared to qemu/bochs. It's not that straightforward, but have a look at the access-privileges model behind the rings. Once you gain ring 1 privileges, the userland of the host OS is toast.

windows applications

Anonymous

I am curious: after VT and Pacifica get in and you can run Windows on Xen directly, could you run games, graphics, etc.?
I guess it depends on what kind of drivers Xen would provide or allow access to. Anyone?
I.e., work on Linux and Windows in tandem.
For example, applications that can't be or have not yet been ported to Linux (such as games, proprietary software...) would run on Windows, and the rest would be Linux.

Sadly the SMP support is rather unstable

Anonymous

Sadly, the SMP support is rather unstable (and therefore currently only in xen-unstable. :-) ).

VMware Community Source is nonsense

Anonymous

VMware's "Community Source" program is exactly like open source, only they don't share their software with anyone except their corporate partners, and don't share the contributed code.

Agreed: VMware Community Source is a load...

Anonymous

I've been reading VMware press releases for the last few weeks with zero substance except how they were going to "open" something up. I went to Intel's Developer Forum, spoke with numerous developers from IBM, HP and Intel, and asked them straight up what the deal was. I asked, "Where is the open code?" They all kind of (quietly) said the same thing: VMware is getting freaked out by Xen and wanted some press. In reality, they may document a few more APIs, but this is just a load...

Author Response

Rami

Hello,

In this article I wrote about the advantages and disadvantages of the Xen and VMware virtualization solutions. One of the Xen advantages I pointed out was that it is a free and open-source project.

I felt it would be unfair not to mention that VMware started this Community Source program at the beginning of this August.

In the article I wrote about this Community Source program: "..it will be providing its partners with access to VMware ESX Server source code"; the VMware news release (to which I gave a link) also talks about giving source to ***partners***.

I think your comment should be read in this light.

Regards,
Rami Rosen

Where can I read your article?

Anonymous

Hi,

I was wondering if there is a link or web page where I can read your article about the advantages and disadvantages of Xen and VMware.

Where can I find it? Is it online?

Thanks.

Gabriel

NetBSD doesn't need to get patched

hubertf

NetBSD has had native support for Xen in its official releases for some time now and does NOT need to be patched. See www.NetBSD.org/Ports/xen for more information.

- Hubert

no POWER5 support

Hollis Blanchard

I am one of the developers working on the PowerPC port of Xen, and we are supporting the PowerPC 970, not POWER5.

Xen in IBM

Anonymous

Hello,

Please look here:

http://lwn.net/Articles/139964/

It says:
...
IBM is working on Power5 support...
...

Are you sure your team is the only one in IBM working on Xen?

Yes, I'm positive. The LWN page is also incorrect

Hollis Blanchard

Yes, I'm positive. The LWN page is also incorrect, though it cites its source so you can see where the information comes from.

Why bother with POWER5 support?

Anonymous

Why would IBM waste resources on POWER5 support? They already have a rock-solid micropartitioning and virtualization environment on POWER5 that supports Linux, one that appears to provide even greater protection across partitions than Xen does with domains. I'm running my own distribution on one as I write this, and I'm sold. I'd rather manage a SAN-backed POWER5 installation than a blade server any day.

I can see a big advantage for the PPC970, though, given that you can already get JS20 blades for their BladeCenter, alongside the HS20.

More on VM and Emulators

moma