View from the Trenches: Hitchhiker's Guide to a Bug Fix

A plea not to panic the next time a bug fix or security alert hits the newsgroups.

Recently, a big flap has arisen over vulnerabilities in OpenSSH and a new bug in OpenSSL, two packages that are cornerstones on which secure access to Linux is built. Of course, these vulnerabilities concern folks; many of us depend on SSH or SSL sessions for our livelihood. We do remote administration, retrieve e-mail and even do such mundane and ordinary tasks as pay bills. If these services aren't secure, we as Linux people could have a hard time keeping peanut butter and jelly on our dinner tables.

So the first advisory comes out, and it's no big deal. Then comes a followup, and a third one, and somewhere in there people start asking, "Is there a replacement for this?" Well, yes, there are several replacements for OpenSSH, some more compatible than others. Most are free, but some are not.

And then there's the question of how to get the published fix working on one's own machine. Do you grab the fix off the developer's Web site, compile it from source and put it in as a local, unpackaged install? Do you wait for your distribution maker to publish a fix? How long is an acceptable wait? Who publishes first? Somehow, the idea of an easy way to roll one's own package gets lost in the shuffle. Admittedly, it's nowhere near as easy as configure-make-make-install or apt-get this or rpm that, but that's what stealing a good script is for, right? Perhaps not.

A lot of energy is given to the idea that everything must be absolutely secure all the time, and any delay is bad news. Never mind the fact that, speaking from personal experience, when your system becomes large enough, it's going to take you anywhere from 48 hours to a week to get the new package in the door, test it with your own system peculiarities, schedule the appropriate downtime window and give your customers ample notice. Only then can you install the fixes on your backup system and fail over to put the new code in play. And, never mind the fact that, unless the black hats find the bug first--and so often these days, the white hats or the vendors themselves find things--there is anywhere from a week to two months' delay between when the bug is found and when exploits begin to be used in the wild.

So, when the OpenSSL bug landed in my lap and the first posted comment I saw was "Is there a package that replaces this?", my interest was piqued. I went to OpenSSL's news page and looked for security advisories. I found the current one (30 September), a small group surrounding the timing attack scheme back in March and April of this year, and a buffer overflow bug fix from July/August of 2002. Prior to that, the only thing I saw marked as bug fix or security was posted in April of 2000--three years, three sets of bugs.

Amongst all these bug postings, I saw messages stating all the major vendors had published their own fixes, each in its inimitable fashion and in what I considered to be reasonable amounts of time. (As of this writing not everyone has checked in with OpenSSL, but it's been only a few hours since I first heard about it.) I dutifully went and did the little rain dance each of my different machines requires to update their packages and checked in with my boss to make sure the rest of the machines were receiving their appropriate penance. They all were, and nobody had posted any panicked messages of "I got hosed" or "exploit in the wild, look out". Outside of a groaning inbox, it was another quiet week, another routine security update.

Routine, that's a good word. Every good administrator has a routine: get a new machine, subscribe to the appropriate security mailing list, see a security update, read it, see if you have that package, grab the update, apply it, bounce the service if necessary and if the updater doesn't do so already. For what it's worth, Debian and SuSE usually do; Red Hat, Mandrake and Slackware usually do not. Your mileage may vary. I've had this routine since Red Hat 5, when I first had a Linux box on the live Internet, and I've yet to have a single machine I administer get cracked.
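That routine's "see if you have that package, grab the update" step can be sketched in shell. This is my own illustrative sketch, not a tool the article describes: the package name, version strings and the `needs_update` helper (built on coreutils' `sort -V`) are all hypothetical, and the actual update commands vary by distribution, as the article notes.

```shell
#!/bin/sh
# Return success (0) if the installed version is older than the advisory's
# fixed version, i.e. the package needs an update.
needs_update() {
    installed="$1"
    fixed="$2"
    # sort -V does a version-aware sort; if the installed version sorts
    # first (and differs from the fix), we are behind the advisory.
    [ "$installed" != "$fixed" ] &&
        [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n 1)" = "$installed" ]
}

# Illustrative check against a hypothetical openssl advisory.
if needs_update "0.9.6j" "0.9.6k"; then
    echo "openssl needs updating"
    # apt-get install openssl     # Debian-style: updater bounces services
    # rpm -Fvh openssl-*.rpm      # RPM-style: restart sshd etc. yourself
else
    echo "openssl is current"
fi
```

The commented-out lines at the end stand in for the "apply it, bounce the service if necessary" step, which, as noted above, differs between the distributions that restart services for you and the ones that leave it to you.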

I know some people now are going to reach down, press Reply and say I'm not paranoid enough. But, I am reminded here of the philosophy behind PGP. PGP--or its GNU analog, GPG--is something a lot of us use every day. If you have an RPM-based system, you use it without even knowing about it, as most RPMs have a PGP/GPG signature. PGP, remember, stands for pretty good privacy--not Totally Secure Crypto or any other such absolute, but pretty good privacy. Today, PGP and similar systems, including OpenSSL and OpenSSH, are used worldwide to authenticate and protect various bits of data. Certainly, if somebody really wants your data, they can find a way to get enough CPU cycles to brute force the decryption and have your credit card number for lunch. Do we worry about it? No, not really. It's good enough.

So too, I think, is the effort to keep OpenSSL, OpenSSH and all of Linux updated good enough. Every piece of software is going to have bugs. As long as those bugs are reasonably few and squashed in a reasonable amount of time, and as long as the vendors keep up with things and don't keep us waiting weeks on end (at which point we can always resort to grabbing the maintainer's source), I'm content to let the Linux community keep on doing what it's doing--making and distributing the best operating systems in the world. I encourage you as a fellow Linux enthusiast or professional to do likewise. Don't panic. It's open source; it'll get fixed, quickly and well. When the day is done, the bits still flow and life goes on. My inbox thanks you.

Glenn Stone is a Red Hat Certified Engineer, sysadmin, technical writer, cover model and general Linux flunkie. He has been hand-building computers for fun and profit since 1999, and he is a happy denizen of the Pacific Northwest.





Re: View from the Trenches: Hitchhiker's Guide to a Bug Fix


Exactly. Anyone who uses Linux for any length of time learns the importance of a high-quality package management system. Debian provides dpkg and apt. Life is good. Once that functionality is in place, patching is no worse than any other update and dependency management.

My pet peeve is spinmeisters who point at security advisories and turn logic on its head by saying that a high patch rate is a BadThing(tm). When I first got into IT someone told me there are two promises a salesman can make with confidence: "Software has bugs, and hardware breaks". Well, seeing as bugs are inevitable, what really counts is the timeliness of discovery and the speed with which a patch comes out. After that, it's just business (package management) as usual.

Not True


Once Linux gets BSD-like random, position-independent compiled code (Red Hat is doing this, I think), then 99% of all exploits will be gone.

And if the distros would just run all their code through Valgrind they'd save themselves a lot of bug fixes later as well.

If those two things happened we'd see OpenBSD level security in Linux.

i sort of disagree


"99% of all exploits will be gone"
I wish I could agree. Yes, the remote root exploits will be made much more difficult. But it will be a statistical thing: the correct entry points in certain code segments will be randomized. That doesn't mean it will be impossible to exploit the code and get remote root. It just means it will be a lot harder (by orders of magnitude).

This will not prevent a classic DoS, I mean remotely crashing the service. The remote root will not work, but the attacker may still be able to crash that particular service.

But anyway, I'm not saying it's worthless. It's still valuable protection; it's just not a panacea.

P.S.: Yes, Red Hat appears to be the first distro to deliver this thing. Their next release (scheduled for launch this month) will include this kind of protection. I can barely sit still while waiting for it... :-)
