Running Network Services under User-Mode Linux, Part I
Okay, we've got UML host capabilities, but we still need a guest kernel to run. This process is somewhat simpler than the host-kernel build, because we don't need the skas patch.
First, navigate back to the directory in which your Linux kernel-source tarball resides, and unpack it a second time. Remember renaming the unpacked source-code directory earlier? That was precisely so we could unpack the tarball again now: the host and guest kernels need to be built in separate source trees.
On my Debian test system, therefore, I unpacked the source tarball to /usr/src/linux-2.6.x and, this time, renamed the resulting directory /usr/src/linux-2.6.x-guest. Again, change ownership of this directory to a nonprivileged user, and change your working directory to it.
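Concretely, the sequence looks something like the following. Treat it as a sketch: the 2.6.x version string is a placeholder for whichever kernel release you're actually using, mick stands in for your own nonprivileged account, the j option assumes a bzipped tarball (use z for a gzipped one), and you'll need root for the commands that write to /usr/src:

host:/usr/src$ tar xjf linux-2.6.x.tar.bz2
host:/usr/src$ mv linux-2.6.x linux-2.6.x-guest
host:/usr/src$ chown -R mick:mick linux-2.6.x-guest
host:/usr/src$ cd linux-2.6.x-guest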
Again, at this point we can skip the step of applying the skas patch. Because we're going to compile our kernel for the special um (User-Mode Linux) architecture rather than for a real architecture like x86, I recommend you prepare your source code tree with the following three commands:
host:/usr/src/linux-2.6.x-guest$ make mrproper ARCH=um
host:/usr/src/linux-2.6.x-guest$ make defconfig ARCH=um
host:/usr/src/linux-2.6.x-guest$ make menuconfig ARCH=um
The make mrproper command clears out any configuration and object files in your source tree; make defconfig generates a fresh default configuration file appropriate to the um architecture; and make menuconfig, of course, gives you the opportunity to fine-tune this configuration file.
Pay particular attention to the following (a quick way to double-check these settings appears after the list):
Life will be simpler if you skip loadable kernel module support and hard-compile everything into the kernel. If you really want kernel modules, see the User-Mode Linux HOWTO, Section 2.2 (see Resources).
Under Processor type and features, double-check that your system architecture is set to um (User-Mode Linux), and make sure /proc/mm is enabled.
Under Networking options, make sure IP: tunneling and 802.1d Ethernet Bridging are enabled.
Under Network device support, enable Universal TUN/TAP device driver support.
Disable as many of the specialized hardware kernel modules as possible; this kernel is going to be running on virtualized hardware, so you won't need support for wireless LAN hardware, obscure parallel-port devices and so forth.
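When you exit menuconfig and save, a quick grep of the resulting .config confirms that the important settings took. The option names below are the ones these menu entries map to in 2.6-era source trees, so treat the exact list as a sanity check rather than gospel; CONFIG_PROC_MM, in particular, appears only in UML-aware trees:

host:/usr/src/linux-2.6.x-guest$ grep -E '^CONFIG_(PROC_MM|NET_IPIP|BRIDGE|TUN)=' .config
CONFIG_PROC_MM=y
CONFIG_NET_IPIP=y
CONFIG_BRIDGE=y
CONFIG_TUN=y

If you skipped loadable module support, .config also should contain the line # CONFIG_MODULES is not set rather than CONFIG_MODULES=y.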
Once you've saved your new configuration file, you can compile the kernel with this command (without first becoming root; execute this as an unprivileged user):
host:/usr/src/linux-2.6.x-guest$ make linux ARCH=um
Note that I did not tell you to make a zipped or bzipped image. Remember, you're going to be running this kernel as though it were a user-space command, so it shouldn't be compressed. The finished kernel will be located in the top-level directory of your source tree (/usr/src/linux-2.6.x-guest in the above examples) and will be named linux. You'll probably want to rename it to something more descriptive, such as uml-guestkernel-2.6.x, and move it to the directory from which you intend to run it, perhaps something like /usr/local/uml/.
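Assuming the example names and locations above (both are just suggestions), that amounts to something like this; run the commands as root, or adjust ownership, if your account can't write to /usr/local:

host:/usr/src/linux-2.6.x-guest$ mkdir -p /usr/local/uml
host:/usr/src/linux-2.6.x-guest$ mv linux /usr/local/uml/uml-guestkernel-2.6.x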
By the way, don't be scared by the size of your guest kernel file. Most of that bulk is symbol information that will not be loaded into memory when you execute it.
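If you're curious, compare the file's on-disk size with the size of the sections that actually get loaded: the text, data and bss totals reported by size typically add up to a small fraction of what ls -lh shows, the difference being symbol and debugging information.

host:/usr/local/uml$ ls -lh uml-guestkernel-2.6.x
host:/usr/local/uml$ size uml-guestkernel-2.6.x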
Your host system now fully supports User-Mode Linux, and you've got a guest kernel image to run. The next step is to obtain or create a root filesystem image to use with the guest kernel. That's where we'll pick up again next time!
Resources for this article: /article/9260.
Mick Bauer (firstname.lastname@example.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the “Network Engineering Polka”.