View from the Trenches: Hitchhiker's Guide to a Bug Fix
Recently, a big flap has arisen over vulnerabilities in OpenSSH and a new bug in OpenSSL, two packages that are cornerstones on which secure access to Linux is built. Of course, these vulnerabilities concern folks; many of us depend on SSH or SSL sessions for our livelihood. We do remote administration, retrieve e-mail and even do such mundane and ordinary tasks as pay bills. If these services aren't secure, we as Linux people could have a hard time keeping peanut butter and jelly on our dinner tables.
So the first advisory comes out, and it's no big deal. Then comes a followup, and a third one, and somewhere in there people start asking, "Is there a replacement for this?" Well, yes, there are several replacements for OpenSSH, some more compatible than others. Most are free, but some are not.
And then there's the question of how to get the published fix working on one's own machine. Do you grab the fix off the developer's Web site, compile it from source and install it locally, outside your package system? Do you wait for your distribution maker to publish a fix? How long is an acceptable wait? Who publishes first? Somehow, the idea of an easy way to roll one's own package gets lost in the shuffle. Admittedly, it's nowhere near as easy as configure-make-make-install or apt-get this or rpm that, but that's what stealing a good script is for, right? Perhaps not.
A lot of energy is given to the idea that a system must be absolutely secure all the time and that any delay in patching is bad news. Never mind the fact that, speaking from personal experience, when your installation becomes large enough, it's going to take you anywhere from 48 hours to a week to get the new package in the door, test it against your own system's peculiarities, schedule an appropriate downtime window and give your customers ample notice. Only then can you install the fixes on your backup system and fail over to put the new code in play. And never mind the fact that, unless the black hats find the bug first--and so often these days it's the white hats or the vendors themselves who find things--there is anywhere from a week to two months' delay between when a bug is found and when exploits begin to be used in the wild.
So, when the OpenSSL bug landed in my lap and the first posted comment I saw was "Is there a package that replaces this?", my interest was piqued. I went to OpenSSL's news page and looked for security advisories. I found the current one (30 September), a small group surrounding the timing-attack scheme back in March and April of this year, and a buffer-overflow fix from July/August of 2002. Prior to that, the only thing I saw marked as a bug fix or security issue was posted in April of 2000--three years, three sets of bugs.
Amongst all these bug postings, I saw messages stating that all the major vendors had published their own fixes, each in its inimitable fashion and in what I considered to be a reasonable amount of time. (As of this writing not everyone has checked in with OpenSSL, but it's been only a few hours since I first heard about it.) I dutifully went and did the little rain dance each of my different machines requires to update their packages and checked in with my boss to make sure the rest of the machines were receiving their appropriate penance. They all were, and nobody had posted any panicked messages of "I got hosed" or "exploit in the wild, look out". Outside of a groaning inbox, it was another quiet week, another routine security update.
Routine, that's a good word. Every good administrator has a routine: get a new machine, subscribe to the appropriate security mailing list, see a security update, read it, see if you have that package, grab the update, apply it, bounce the service if necessary and if the updater doesn't do so already. For what it's worth, Debian and SuSE usually do; Red Hat, Mandrake and Slackware usually do not. Your mileage may vary. I've had this routine since Red Hat 5, when I first had a Linux box on the live Internet, and I've yet to have a single machine I administer get cracked.
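The routine above can be sketched as a small shell helper. To keep it safe to run anywhere, this sketch only prints the commands rather than executing them; the distro families, the "openssl" package and the "sshd" service are illustrative assumptions on my part, not a prescription--substitute whatever your own machines require.

```shell
#!/bin/sh
# Sketch of the update routine: given a distro family and a package name,
# print the commands you'd run to pull the fix and bounce the service.
update_plan() {
  family=$1
  pkg=$2
  case "$family" in
    debian)  printf 'apt-get update && apt-get install %s\n' "$pkg" ;;
    redhat)  printf 'rpm -Fvh %s-*.rpm\n' "$pkg" ;;   # freshen already-installed packages
    *)       printf 'unknown family: %s\n' "$family"; return 1 ;;
  esac
  # Bounce the dependent service, in case the updater doesn't do it for you:
  printf '/etc/init.d/sshd restart\n'
}

update_plan debian openssl
```

On Debian and SuSE the restart step is usually handled for you; on the others it's worth keeping in the script so you don't forget it at 2 a.m.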
I know some people now are going to reach down, press Reply and say I'm not paranoid enough. But, I am reminded here of the philosophy behind PGP. PGP--or its GNU analog, GPG--is something a lot of us use every day. If you have an RPM-based system, you use it without even knowing about it, as most RPMs have a PGP/GPG signature. PGP, remember, stands for pretty good privacy--not Totally Secure Crypto or any other such absolute, but pretty good privacy. Today, PGP and similar systems, including OpenSSL and OpenSSH, are used worldwide to authenticate and protect various bits of data. Certainly, if somebody really wants your data, they can find a way to get enough CPU cycles to brute-force the decryption and have your credit card number for lunch. Do we worry about it? No, not really. It's good enough.
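For the curious, that signature check on an RPM is a one-liner. This sketch only echoes the commands, since no .rpm file ships with this column; the package filename and key path are hypothetical examples.

```shell
#!/bin/sh
# Sketch: how you'd verify a package's GPG signature before installing it.
# verify_rpm prints the command rather than running it, so this is safe anywhere.
verify_rpm() {
  # On a real system, first import the vendor's key, e.g.:
  #   rpm --import /usr/share/rhn/RPM-GPG-KEY        (path is an example)
  # then check the package:
  printf 'rpm --checksig %s\n' "$1"
}

verify_rpm openssl-0.9.7c-1.i386.rpm
```

A detached-signature tarball works the same way with GPG directly: `gpg --verify openssl.tar.gz.asc openssl.tar.gz` (filenames again illustrative).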
So too, I think, is the effort to keep OpenSSL, OpenSSH and all of Linux updated good enough. Every piece of software is going to have bugs. As long as those bugs are reasonably few and squashed in a reasonable amount of time, and as long as the vendors keep up with things and don't keep us waiting weeks on end (at which point we can always resort to grabbing the maintainer's source), I'm content to let the Linux community keep on doing what it's doing--making and distributing the best operating systems in the world. I encourage you as a fellow Linux enthusiast or professional to do likewise. Don't panic. It's open source; it'll get fixed, quickly and well. When the day is done, the bits still flow and life goes on. My inbox thanks you.
Glenn Stone is a Red Hat Certified Engineer, sysadmin, technical writer, cover model and general Linux flunkie. He has been hand-building computers for fun and profit since 1999, and he is a happy denizen of the Pacific Northwest.