A Kernel By Any Other Name


For legacy reasons we standardized our environment on Ubuntu Server. Generally, when a new Ubuntu LTS release comes out, we, like many others, start deploying any new installations on the new release while we plan upgrades for our most out-of-date servers. When 10.04 (lucid) was released, we already had everything up to 8.04 (hardy), so it wouldn't be terribly painful to bring everything up to the newer release. At least that was our thinking, but when we installed our first 10.04 server, we got a surprise.

Some time ago, I'd been burned by a few bugs in the x86_64 version of the kernel that weren't in the 32-bit kernel. For years the 32-bit code had been much more tried and true, while x86_64 wasn't nearly as well hammered out. I'd made the decision to stick with 32-bit code everywhere possible, unless I had a specific need only the 64-bit capabilities could provide, like a single process that needs more than 2GB of RAM. The new unit going in was going to be a 32-bit instance.

I use a combination of kickstart and scripts to install new machines. My script that runs on first boot performs the following to ensure we have the latest server kernel set to run on boot:

apt-get --assume-yes install linux-server

That is sort of a fire-and-forget command, so I don't usually go back and check on it. We were putting the server in as a response to increased load and needed it sooner rather than later. It was in, tested and in production for a couple of hours before I noticed the load average was about four times higher than on the other servers (all of which were 8.04; we had lucid in our testing environments, but production load is always difficult to simulate exactly, so it wasn't until we saw the full production load that the problem was evident). A little investigation turned up the reason, and a custom kernel compile solved it. It turned out to be something I should have known: there *is* no server kernel package for x86 in lucid. The linux-server package is a pointer to linux-generic-pae (the desktop kernel with large memory support).
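Had I checked, a quick look at what the metapackage resolves to would have given the game away. A minimal sanity check, assuming a Debian/Ubuntu system (the comment describes what a lucid-era 32-bit install reports):

```shell
# See what the linux-server metapackage actually pulls in before trusting it.
# (apt-cache only exists on Debian-derived systems, and the package may not
# be in every index, so don't let a miss abort the script.)
if command -v apt-cache >/dev/null; then
    apt-cache depends linux-server 2>/dev/null || echo "linux-server not in the package index here"
fi

# After the reboot, confirm which kernel flavour is really running.
uname -r    # a lucid x86 box reports a -generic-pae kernel, not -server
```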

Why did I care about which kernel version was installed? Does it make any difference? The short answer is yes, very much. There are two major differences between the desktop and server kernels, the timer interrupt frequency and the elevator, along with a few others. If you compare the two kernels' compilation configurations like so:

diff --suppress-common-lines -y config-[version]-server config-[version]-generic

You'll see them all. The main differences are these:

CONFIG_DEFAULT_IOSCHED="deadline"     |      CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_HZ=100                         |      CONFIG_HZ=250

CONFIG_HZ sets the frequency, in times per second, with which the kernel sticks its head up and looks around for events that need to be handled. You may think that since 250 times per second is more frequent than 100, more frequent must be better, and for some types of workloads this is true. When you want to make sure your system is nice and snappy in response to your key presses, say if you are playing an arcade-style game, it certainly makes sense to have that set a bit higher. On the other hand, as the system comes under higher load and interrupt queues begin to grow, checking more frequently rapidly increases the number of context switches necessary to pass events back and forth between the kernel and user space. As events per second go up, reducing the number of checks per second makes more and more of a difference to the server's capacity. The number of checks per second is also multiplied by the number of processors available to the system, so the number can be safely reduced depending on the system architecture. Usually there is a balance to be struck between request latency and throughput that can take some experimenting to optimize.
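The scaling is easy to see with a little arithmetic. The figures below are purely illustrative, for a hypothetical eight-core box:

```shell
# The kernel services roughly HZ periodic timer interrupts per second
# on each CPU, so the total scales with both HZ and the CPU count.
ncpus=8
echo "desktop (HZ=250): $((250 * ncpus)) ticks/s"   # 2000
echo "server  (HZ=100): $((100 * ncpus)) ticks/s"   # 800
```

At HZ=100 the same box fields less than half the timer interrupts, which is overhead it can spend on real work instead.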

The elevator, or I/O scheduler, is the algorithm the kernel uses to decide the order in which to service pending disk requests. The CFQ (completely fair queuing) scheduler is the default of the generic desktop kernel. It is designed to make sure every process, even ones that seldom generate events, gets equal time to run its tasks. This is great at smoothing out a desktop system under load and preventing latency in responding to user-generated events via keyboard or mouse. It lays an egg on a system dedicated to a particular task like serving web pages or handling VoIP events. For server-type workloads, you *want* a particular process to be able to dominate the processors and disk access. This is what the deadline elevator is designed to do. As the name implies, it attempts to guarantee that no request will wait for service longer than a particular time. The problem with my newly deployed server was that while the CFQ scheduler was giving precedence to seldom-used tasks, the events I actually wanted the system handling got older and older even as it generated more and more interrupts. Throw in a higher clock rate for more frequent checks, and the problem is compounded, resulting in four times the load average.
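On kernels of this era you also can inspect, and change, the elevator per device at runtime through sysfs, which is handy for confirming what a box is actually doing. A sketch, assuming standard block device names (the write needs root):

```shell
# Show the elevator in use for each disk; the active one appears in
# brackets, e.g. "noop anticipatory deadline [cfq]".
for q in /sys/block/*/queue/scheduler; do
    if [ -r "$q" ]; then
        echo "$q: $(cat "$q")"
    fi
done

# To switch a disk (sda here) to deadline without rebooting:
# echo deadline > /sys/block/sda/queue/scheduler
```

Note that a runtime switch doesn't survive a reboot, and it doesn't touch CONFIG_HZ, so it wouldn't have fully substituted for the recompile in my case.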

I was able to fix this relatively quickly with a custom kernel compile with the options I needed, but I couldn't understand why the Ubuntu kernel team would essentially abandon the server kernel for 32-bit systems. It is certainly easier for me to stay up to date on security and bug fixes when I can use a pre-packaged kernel, so this represented a slight inconvenience to me.

So I jumped on the freenode #ubuntu-kernel channel, and what I found out was that they had knowingly, deliberately and permanently axed the server kernel for x86. They had been devoting resources to two architectures; the x86_64 kernel is the buggier of the two, there is increasing demand for it coinciding with decreasing demand for the x86 kernel, and more and more people are moving to x86_64, then complaining about the quality and moving back or to another distro (like I did). So they made the strategic decision to go to one server architecture, x86_64, since virtually all server-class hardware supports it now, and soon it will be as hammered on and tested as the x86 kernel ever was. Keep this in mind when mapping out your server roadmap.

From now on I'll be keeping a closer eye on ubuntu-devel@lists.ubuntu.com and kernel-team@same.


I was cloud before cloud was cool. Not in the sense of being an amorphous collection of loosely related molecules with indeterminate borders -- or maybe I am. Holla @geek_king, http://twitter.com/geek_king



Great Article!

Anonymous's picture

It is really interesting to learn more about the differences between the 32-bit and the 64-bit kernels. It makes me want to compile my own kernel.

Go ahead!

Greg Bledsoe's picture

It's a rite of passage. Once you've custom-compiled your own kernel, you've graduated from casual Linux user to certified Linux Guy! (Or Gal!)



Ariya's picture

Lately, I hear only Debian, Debian, and a lot of Ubuntu-bashing. IMO, without Ubuntu coming onto the scene, Debian would have been dead by now. Ubuntu gave a lot of fresh air to the Linux community and to end users. For the geeks who want to work at the CLI, any distro/OS would be OK, but this world is populated by ordinary people who use the computer for simple everyday tasks. More than 95%(?) of all Linux users don't really know how the innards of GNU/Linux work, and don't really care. They want a distro/OS to just work. They have the right to ask for that.

There are a lot of dedicated people trying their hand at making new distros, mostly using Ubuntu. And they come up with surprisingly good distros!

I would like to point out to all those who run Linux servers at the CLI that they are a small percentage of all those who use GNU/Linux distros every day, so when going around Ubuntu-bashing, think about us simple, ordinary end users!

Take care!

Don't disagree with you either!

Greg Bledsoe's picture

*I* certainly am not an Ubuntu-basher! I would never denigrate the efforts of people working hard to put out a good product. I would, on the other hand, point out where I felt it fell short, so they can make it better. Besides, most of my desktop and laptop systems are running Ubuntu. My server preference doesn't have anything to do with Ubuntu not being any good, just that it doesn't meet *my* needs as well as other distributions do. I think that has to do with how in tune (or out of touch) those making the architectural decisions for the distro are with people like me, who are implementing it in the enterprise.

Ubuntu is a fine set of distributions, and has spawned an entire ecosystem of derivatives besides, and I don't think you'll catch me saying otherwise, except to say that Ubuntu Server is not mature for Enterprise use. (Even though I use it thusly, which is how I know.) :-D

Thanks for your comment, and have a nice day!

Keep 'em coming guys!


every distro has its quirks

markh's picture

Personally, I run Ubuntu on my desktop (just to make multimedia and new hardware easy), but I run Debian on our servers... total control with a text editor just appeals to me for some reason :D

Ubuntu makes my desktop easy, but I don't want my server platforms subject to Shuttleworth's wild whims of "let's throw out this, this and this in favor of untested that, that and that".


Not badly said

Greg Bledsoe's picture

I find nothing to disagree with there. :-)


OK, 64 instead of 32 from now on and for the future

Eduardo's picture


Despite the "religious distro debate" :) I would like to say YES, it's time for 'Buntu, Debian, RH, SUSE, etc. to take 64-bit seriously. OK, OK, they are serious people and are doing serious work... but the fact is there is still a lot to do.

There is no more reason to use 32-bit, since we are all buying 64-bit machines (desktops/servers) at the very same price as 32-bit ones, can use a lot more memory and can take advantage of the 64-bit architecture.

But, in fact, isn't it time for the kernel developers to join their efforts into a single, structured, fully functional, reliable and standardized 64-bit kernel? It is the system's core, is it not?

Also, we cannot forget that M$ is flying around... The old and well-known strategy of convincing people of the unity of the Windows core could rise again, fuelled by this multitude of Linux kernels.


How is *buntu 'legacy'?

lefty.crupps's picture

> For legacy reasons we standardized our environment on Ubuntu Server.

How do you see Ubuntu Server as a 'legacy' product, or how does it fit into any sort of 'legacy' definition? How does a six-year-old distro have any sort of 'legacy' in the LinuxJournal.com environment, which must have been running a webserver since far before any *buntu was released?

And why aren't you running Debian??


Greg Bledsoe's picture

I see what you are doing there -- but I will resist! :-D

I don't have a lot of interest in the religious distro debate - though I do think this warrants a little additional explanation. My wording is actually "for legacy reasons" not "X is legacy technology." As I said in another comment, Ubuntu was entrenched in the development process there, and my problem with that didn't really have anything to do with it being "legacy" as much as it does with it being "immature" and in many ways yet to fulfill its promise.

Matching distro to use case is a bit of a dark art: part science, part sorcery, part metaphysics, and in most cases there are no "right" answers. IN GENERAL: I prefer Ubuntu on the desktop and RH/CentOS on the server. There are cases to be made for other distros in particular situations, but there is also the need to balance that against the expertise you gain in the nuances of each over time with heavy use. So in particular, YMMV.


No need for kernel recompilation

Axel's picture

There is no need to recompile the kernel to change the effect of CONFIG_HZ. You can add the parameter "divider=2" to your grub.conf (it seems this parameter has to be an integer). The actual frequency will then be CONFIG_HZ/divider. And I think you must also add the parameter "nohz=off".

The I/O scheduler can be selected at boot time using the "elevator" kernel parameter, e.g. "elevator=deadline".

As a side note, my Lucid desktop installation uses 100Hz as default (x86_64) but will use a "tickless" (nohz) kernel.
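On a GRUB2 system, those parameters would go on the kernel command line via /etc/default/grub; a sketch (file locations vary by release, and lucid-era installs may still use menu.lst instead):

```shell
# /etc/default/grub -- add the boot-time parameters to the kernel command
# line (illustrative; back up the file before editing):
#   GRUB_CMDLINE_LINUX_DEFAULT="elevator=deadline divider=2 nohz=off"
# Then regenerate the boot configuration and reboot:
sudo update-grub
```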

That is *one* solution...

Greg Bledsoe's picture

...but I've been bitten by failed boot time parameter selections before. All things considered, I'm more comfortable with a recompile.


Ubuntu isn't enterprise ready

Anonymous's picture

I'm constantly finding showstoppers in Ubuntu Server, like the MySQL init script not working properly and the bug being marked "will not fix". Try a real enterprise distro such as Red Hat/CentOS.

I actually agree with you

Greg Bledsoe's picture

It isn't Enterprise ready. When I came to the organization, Ubuntu was an entrenched part of the process, and that was a battle I chose not to fight. It's close enough, and with my brilliance and skill I can overcome its deficits. :-)

A big problem for me is that LDAP client authentication *still* doesn't work out of the box. When that is fixed, maybe I'll be ready to reconsider calling Ubuntu Server "Enterprise Ready."

