A Kernel By Any Other Name
For legacy reasons we standardized our environment on Ubuntu Server. Generally, when a new Ubuntu LTS release comes out we, like many others, start deploying new installations on it while planning upgrades for our most out-of-date servers. When 10.04 (lucid) was released, we already had everything up to 8.04 (hardy), so it wouldn't be terribly painful to bring everything up to the newer release. At least that was our thinking, but when we installed our first 10.04 server, we got a surprise.
Some time ago, I'd been burned by a few bugs in the x86_64 version of the kernel that weren't in the 32-bit kernel. For years the 32-bit code had been far more tried and true, while x86_64 wasn't nearly as well hammered out. I'd made the decision to stick with 32-bit code everywhere possible, unless I had a specific need only the 64-bit capabilities could provide, like a single process that needs more than 2 gig of RAM. The new unit going in was going to be a 32-bit instance.
I use a combination of kickstart and scripts to install new machines. My script that runs on first boot performs the following to ensure we have the latest server kernel set to run on boot:
apt-get --assume-yes install linux-server
That is sort of a fire-and-forget command, so I don't usually go back and check on it. We were putting the server in as a response to increased load and needed it sooner rather than later. It was in, tested and in production for a couple of hours before I noticed the load average was about four times higher than on the other servers (all of which were running 8.04; we had lucid in our testing environments, but production load is always difficult to simulate exactly, so it wasn't until we saw the full production load that the problem was evident). A little investigation turned up the reason, and a custom kernel compile solved it. It turned out to be something I should have known: there *is* no server kernel package for x86 in lucid. The linux-server package is just a pointer to linux-generic-pae (the desktop kernel with large memory support).
Why did I care about which kernel was installed? Does it make any difference? The short answer is yes, very much. There are two major differences between the desktop and server kernels: the timer interrupt frequency and the elevator. There are a few others as well; if you compare the server and generic kernel configurations like so:
diff --suppress-common-lines -y config-[version]-server config-[version]-generic
You'll see them all. The main differences are these:
CONFIG_DEFAULT_IOSCHED="deadline"  |  CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_HZ=100                      |  CONFIG_HZ=250
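As a quick sanity check, you can pull those two settings straight out of any config file. The sketch below runs against an inline sample so it stands alone; on a real system you'd point the grep at the files under /boot instead:

```shell
# Sketch: extract the two settings that matter from a kernel config.
# The inline sample stands in for a real /boot/config-[version] file.
cat > /tmp/sample-config <<'EOF'
CONFIG_HZ=100
CONFIG_DEFAULT_IOSCHED="deadline"
EOF
grep -E '^CONFIG_(HZ|DEFAULT_IOSCHED)=' /tmp/sample-config
```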
CONFIG_HZ sets the frequency, in times per second, with which the kernel sticks its head up and looks around for events that need to be handled. You might think that 250 times per second beats 100, that more frequent must be better, and for some types of workloads this is true. When you want to make sure your system is nice and snappy in response to your key presses, say if you are playing an arcade-style game, it certainly makes sense to set it a bit higher. On the other hand, as the system comes under higher load and interrupt queues begin to grow, checking more frequently sharply increases the number of context switches necessary to pass events back and forth between the kernel and user space. As events per second go up, reducing the number of checks per second makes more and more of a difference to the server's capacity. And because each processor takes its own timer interrupts, the number of checks per second is multiplied by the number of processors, so the setting often can be reduced safely depending on the system architecture. Usually there is a balance to be struck between request latency and throughput, and it can take some experimenting to optimize.
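To put that multiplication in concrete terms, here's a back-of-the-envelope sketch; the core count is an assumption purely for illustration:

```shell
# Rough math: total timer interrupts per second scale with HZ times CPU count.
cpus=8   # assumed core count, for illustration only
for hz in 100 250; do
  echo "HZ=$hz on $cpus CPUs -> $((hz * cpus)) timer ticks/second system-wide"
done
```

On those assumed eight cores, the desktop setting means 2,000 system-wide wakeups every second versus 800 for the server setting, before a single useful event has been handled.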
The scheduler, or elevator, is the algorithm the kernel uses to select which set of commands to run. The CFQ (completely fair queueing) scheduler is the default in the generic desktop kernel. It is designed to make sure every process, even one that seldom generates events, gets equal time to run its tasks. This is great at smoothing out a desktop system under load and preventing latency in responding to user-generated events via keyboard or mouse. It lays an egg on a system dedicated to a particular task, like serving web pages or handling VoIP events. For server-type workloads, you *want* a particular process to be able to dominate the processors and disk access, and that is what the deadline elevator is designed to allow. As the name implies, it attempts to guarantee that no task will wait for execution longer than a particular time. The problem with my newly deployed server was that while the CFQ scheduler was giving precedence to seldom-used tasks, the events I actually wanted the system handling got older and older even as it generated more and more interrupts. Throw in a higher clock rate for more frequent checks and the problem is compounded, resulting in four times the load average.
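Incidentally, you don't need a recompile just to inspect or change the elevator; it is exposed at runtime through sysfs, where the bracketed entry is the active one. The sketch below works against a mocked-up file so it is self-contained; on a real box the path would be /sys/block/sda/queue/scheduler (device name assumed):

```shell
# Mock of /sys/block/sda/queue/scheduler; the bracketed entry is active.
echo 'noop anticipatory deadline [cfq]' > /tmp/scheduler
# Report the currently active elevator:
tr ' ' '\n' < /tmp/scheduler | grep '^\[' | tr -d '[]'
# On a real system you could switch it (until reboot) with:
#   echo deadline > /sys/block/sda/queue/scheduler
```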
I was able to fix this relatively quickly with a custom kernel compile with the options I needed, but I couldn't understand why the Ubuntu kernel team would essentially abandon the server kernel for 32-bit systems. It is certainly easier for me to stay up to date on security and bug fixes when I can use a pre-packaged kernel, so this represented a real inconvenience.
So I jumped on the freenode #ubuntu-kernel channel, and what I found out was that they had knowingly, deliberately and permanently axed the server kernel for x86. They were devoting resources to two architectures, and the x86_64 kernel is the buggier of the two, with demand for it increasing even as demand for the x86 kernel decreases. Meanwhile, more and more people were moving to x86_64, then complaining about the quality and moving back, or to another distro (like I did). So they made the strategic decision to go with one server architecture, x86_64, since virtually all server-class hardware supports it now, and soon it will be as hammered on and tested as the x86 kernel ever was. Keep this in mind when mapping out your server roadmap.
From now on I'll be keeping a closer eye on email@example.com and kernel-team@same.
I was cloud before cloud was cool. Not in the sense of being an amorphous collection of loosely related molecules with indeterminate borders -- or maybe I am. Holla @geek_king, http://twitter.com/geek_king