Paranoid Penguin - Running Network Services under User-Mode Linux, Part III

Fine-tune and firewall your UML guest systems.

In the last two Paranoid Penguin columns, I walked you through the process of building a virtual network server using User-Mode Linux. We built both host and guest kernels, obtained a prebuilt root filesystem image, configured networking on the host, and when we left off last month, we finally had booted our guest kernel with bridged networking, ready for configuration, patching and server software installation.

This month, I tie up some loose ends in our example guest system's startup and configuration, show you the uml_moo command, demonstrate how to write firewall rules on your UML host system, offer some miscellaneous security tips and give some pointers on creating your own root filesystem image. And, can you believe we will have scratched only the surface of User-Mode Linux, even after three articles? Hopefully, we'll have scratched deeply enough for you to be off to a good start!

Guest System Configuration

You may recall that last time we set up bridged networking on our host, creating a local tunnel interface called uml-conn0 that we bridged to the host system's “real” eth0 interface. If you don't have last month's column, my procedure was based on the one by David Cannings (see the on-line Resources). When we then started up our guest (User-Mode) kernel, we mapped a virtual eth0 on the guest to uml-conn0 via a kernel parameter, like so:

umluser@host$ ./debkern ubd0=debcow,debroot \
  root=/dev/ubda eth0=tuntap,uml-conn0

The last parameter, obviously, contains the networking magic: eth0=tuntap,uml-conn0. It can be translated to “the guest kernel's eth0 interface is the host system's tunnel/tap interface uml-conn0”. This is important to understand; to the host (real) system, the guest's Ethernet interface is called uml-conn0, but to the guest system itself, its Ethernet interface is plain-old eth0.
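You can see this naming difference for yourself by comparing the two views. A quick sketch, assuming a brctl-based bridge as described last month (the bridge name uml-bridge here is hypothetical; substitute whatever you named yours):

```shell
# On the host: the guest's NIC appears as uml-conn0,
# attached to the bridge alongside the physical eth0
# (bridge name "uml-bridge" is hypothetical):
host$ brctl show uml-bridge

# Inside the guest: the very same interface is plain-old eth0:
guest# ifconfig eth0
```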

Therefore, if you run an iptables (firewall) rule set on either host or guest (I strongly recommend you do so at least on the host), any rules that use interface names as sources or targets must take this difference in nomenclature into account. We'll discuss some example host firewall rules shortly, but we're not quite done with guest-kernel startup parameters yet.
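To illustrate, here's a minimal sketch of host firewall rules for a virtual DNS server guest, assuming your host kernel has bridge-netfilter support (the service, port and policy choices are hypothetical). Because the guest's traffic crosses a bridge port rather than a routed interface, matching on uml-conn0 takes iptables' physdev match:

```shell
#!/bin/sh
# Default-deny anything crossing the bridge:
iptables -P FORWARD DROP

# Allow DNS queries in to the guest -- on the host side,
# the guest's interface is uml-conn0, NOT eth0:
iptables -A FORWARD -m physdev --physdev-out uml-conn0 \
    -p udp --dport 53 -j ACCEPT

# ...and allow the guest's replies back out:
iptables -A FORWARD -m physdev --physdev-in uml-conn0 \
    -p udp --sport 53 -j ACCEPT
```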

Going back to that startup line, we've got definitions of our virtual hard drive (ubd0, synonymous with ubda), our path to virtual root and, of course, our virtual Ethernet interface. But what about memory?

On my OpenSUSE 10.1 host system, running a UML Debian guest with the above startup line resulted in a default memory size of about 29MB—pretty puny by modern standards, especially if I want that guest system to run real-world, Internet-facing network services. Furthermore, I've got an entire gigabyte of physical RAM on my host system to allocate; I easily can spare 256MB of RAM for my guest system.

To do so, all I have to do is pass the parameter mem=256M to the guest kernel, like so:

umluser@host$ ./debkern mem=256M ubd0=debcow,debroot \
  root=/dev/ubda eth0=tuntap,uml-conn0

Obviously enough, you can specify however much more or less than that as you like, and you can allocate different amounts of RAM for multiple guests running on a single host (perhaps 128M for your virtual DNS server, but 512M for your virtual Web server, for example). Just be sure to leave enough non-guest-allocated RAM for your host system to do what it needs to do.
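For example, you might start two guests from the same pristine image, each with its own COW file, tunnel/tap interface and RAM allocation (the filenames and the second interface name here are hypothetical):

```shell
# A lean virtual DNS server:
umluser@host$ ./debkern mem=128M ubd0=dnscow,debroot \
    root=/dev/ubda eth0=tuntap,uml-conn0

# A beefier virtual Web server:
umluser@host$ ./debkern mem=512M ubd0=webcow,debroot \
    root=/dev/ubda eth0=tuntap,uml-conn1
```

Note that each guest needs its own COW file and its own tunnel/tap interface; two guests must never share either.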

Speaking of which, you'll save a lot of RAM on your host system by not running the X Window System, which I've always recommended against running on hardened servers anyhow. The X server on my test host uses around 100MB, with actual desktop managers requiring more. On top of this, the X Window System has a history of security vulnerabilities with varying degrees of exploitability by remote attackers (remember, a “local” vulnerability ceases being local the moment any non-local user starts a shell).

Managing COW Files

If, as I recommended last month, you run your UML guest with a Copy on Write (COW) file, you may be wondering whether your UML guest-kernel startup line is the only place you can manage COW files. (A COW file is created automatically when you specify a filename for one in your ubd0=... parameter.)

Actually, the uml-utilities package includes two standalone commands for managing COW files: uml_moo and uml_mkcow. Of the two, uml_moo is the more likely to be useful to you. You can use uml_moo to merge all the filesystem changes contained in a COW file into its parent root filesystem image.

For example, suppose I run the UML guest kernel startup command described earlier, and from within that UML guest session I configure networking, apply all the latest security patches, install and configure BIND v9, and finally achieve a “production-ready” state. I may then decide it's time to take a snapshot of the UML guest by merging all those changes (written, so far, only into the file debcow) into the actual filesystem image (debroot). To do so, I'd use this command:

umluser@host$ uml_moo ./debcow newdebroot

The first argument you specify to uml_moo is the COW file you want to merge. Because a COW file records the name of the filesystem image to which it corresponds, you don't have to specify that image. Normally, however, you should specify the name of the new filesystem image you want uml_moo to create.

My example uml_moo command, therefore, will leave the old root filesystem image debroot intact (maybe it's also being used by other UML guests, or maybe I simply want to preserve a clean image), creating a new filesystem image named newdebroot that contains my fully configured and updated root filesystem.

If I want to do a hard merge, however, which replaces the old filesystem image with the merged one (with the same filename as before), perhaps because my hard disk is too full for extra image files, I'd instead use uml_moo -d ./debcow (the -d stands for destructive merge).
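Side by side, the two merge styles look like this:

```shell
# Non-destructive merge: debroot is left intact, and the
# changes in debcow are written to a new image, newdebroot:
umluser@host$ uml_moo ./debcow newdebroot

# Destructive merge: the changes in debcow are folded back
# into debroot itself, saving disk space:
umluser@host$ uml_moo -d ./debcow
```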




More frameworks would be nice


Currently there are at least Xen, VServer and OpenVZ as possible alternatives to UML-based solutions for virtual machines and/or compartmentalized components. Each of them has pros and cons. Presenting the other frameworks in future articles would be a nice thing.