View from the Trenches: Hitchhiker's Guide to a Bug Fix
Recently, a big flap has arisen over vulnerabilities in OpenSSH and a new bug in OpenSSL, two packages that are cornerstones on which secure access to Linux is built. Of course, these vulnerabilities concern folks; many of us depend on SSH or SSL sessions for our livelihood. We do remote administration, retrieve e-mail and even do such mundane and ordinary tasks as pay bills. If these services aren't secure, we as Linux people could have a hard time keeping peanut butter and jelly on our dinner tables.
So the first advisory comes out, and it's no big deal. Then comes a followup, and a third one, and somewhere in there people start asking, "Is there a replacement for this?" Well, yes, there are several replacements for OpenSSH, some more compatible than others. Most are free, but some are not.
And then there's the question of how to get the published fix working on one's own machine. Do you grab the fix off the developer's Web site, compile it from source and put it in as a local, unpackaged install? Do you wait for your distribution maker to publish a fix? How long is an acceptable wait? Who publishes first? Somehow, the idea of an easy way to roll one's own package gets lost in the shuffle. Admittedly, it's nowhere near as easy as configure-make-make-install or apt-get this or rpm that, but that's what stealing a good script is for, right? Perhaps not.
A lot of energy is given to the idea that everything must be absolutely secure all the time and that any delay is bad news. Never mind the fact that, speaking from personal experience, when your system becomes large enough, it's going to take you anywhere from 48 hours to a week to get the new package in the door, test it against your own system's peculiarities, schedule the appropriate downtime window and give your customers ample notice. Only then can you install the fixes on your backup system and fail over to put the new code in play. And, never mind the fact that, unless the black hats find the bug first--and so often these days, the white hats or the vendors themselves find things--there is anywhere from a week to two months' delay between when a bug is found and when exploits begin to be used in the wild.
So, when the OpenSSL bug landed in my lap and the first posted comment I saw was "Is there a package that replaces this?", my interest was piqued. I went to OpenSSL's news page and looked for security advisories. I found the current one (30 September), a small group surrounding the timing attack scheme back in March and April of this year, and a buffer overflow bug fix from July/August of 2002. Prior to that, the only thing I saw marked as bug fix or security was posted in April of 2000--three years, three sets of bugs.
Amongst all these bug postings, I saw messages stating that all the major vendors had published their own fixes, each in its inimitable fashion and in what I considered to be reasonable amounts of time. (As of this writing not everyone has checked in with OpenSSL, but it's been only a few hours since I first heard about it.) I dutifully went and did the little rain dance each of my different machines requires to update their packages and checked in with my boss to make sure the rest of the machines were receiving their appropriate penance. They all were, and nobody had posted any panicked messages of "I got hosed" or "exploit in the wild, look out". Outside of a groaning inbox, it was another quiet week, another routine security update.
Routine, that's a good word. Every good administrator has a routine: get a new machine, subscribe to the appropriate security mailing list, see a security update, read it, see if you have that package, grab the update, apply it, bounce the service if necessary and if the updater doesn't do so already. For what it's worth, Debian and SuSE usually do; Red Hat, Mandrake and Slackware usually do not. Your mileage may vary. I've had this routine since Red Hat 5, when I first had a Linux box on the live Internet, and I've yet to have a single machine I administer get cracked.
I know some people now are going to reach down, press Reply and say I'm not paranoid enough. But, I am reminded here of the philosophy behind PGP. PGP--or its GNU analog, GPG--is something a lot of us use every day. If you have an RPM-based system, you use it without even knowing about it, as most RPMs have a PGP/GPG signature. PGP, remember, stands for pretty good privacy--not Totally Secure Crypto or any other such absolute, but pretty good privacy. Today, PGP and similar systems, including OpenSSL and OpenSSH, are used worldwide to authenticate and protect various bits of data. Certainly, if somebody really wants your data, they can find a way to get enough CPU cycles to brute force the decryption and have your credit card number for lunch. Do we worry about it? No, not really. It's good enough.
So too, I think, is the effort to keep OpenSSL, OpenSSH and all of Linux updated good enough. Every piece of software is going to have bugs. As long as those bugs are reasonably few and squashed in a reasonable amount of time, and as long as the vendors keep up with things and don't keep us waiting weeks on end (at which point we can always resort to grabbing the maintainer's source), I'm content to let the Linux community keep on doing what it's doing--making and distributing the best operating systems in the world. I encourage you as a fellow Linux enthusiast or professional to do likewise. Don't panic. It's open source; it'll get fixed, quickly and well. When the day is done, the bits still flow and life goes on. My inbox thanks you.
Glenn Stone is a Red Hat Certified Engineer, sysadmin, technical writer, cover model and general Linux flunkie. He has been hand-building computers for fun and profit since 1999, and he is a happy denizen of the Pacific Northwest.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem always to have the right tool for the job.
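The find-plus-grep combination described above can be sketched in a few lines of shell. The demo directory, file names and the "ERROR" search string here are illustrative; in practice you would point find at /home (or wherever your logs live) and search for whatever entry you care about.

```shell
# Demo: build a small tree of log files, then chain find and grep
# to list only the files that contain a given entry.
mkdir -p /tmp/logdemo/a /tmp/logdemo/b
echo "INFO: all quiet"     > /tmp/logdemo/a/app.log
echo "ERROR: disk trouble" > /tmp/logdemo/b/sys.log
echo "not a log file"      > /tmp/logdemo/b/notes.txt

# -name '*.log' restricts the search to log files, -l prints only the
# names of matching files, and '{} +' hands find's results to grep in
# batches instead of spawning one grep per file.
find /tmp/logdemo -name '*.log' -type f -exec grep -l 'ERROR' {} +
# prints /tmp/logdemo/b/sys.log
```

The same one-liner scales from a three-file demo to a multi-gigabyte /home, which is exactly the erector-set quality the paragraph above describes.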
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
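For context, a traditional cron deployment is just a handful of crontab entries. The scripts and schedules below are illustrative placeholders, not anything from the webinar; the five leading fields are minute, hour, day of month, month and day of week.

```shell
# Edit with `crontab -e`; each line is: m h dom mon dow command
# Rotate logs nightly at 02:30 (script path is hypothetical):
30 2 * * *  /usr/local/sbin/rotate-logs.sh
# Run a weekly report every Monday at 06:00 (also hypothetical):
0  6 * * 1  /usr/local/bin/weekly-report.sh
```

Plain entries like these are exactly where cron shines, and where its limits start to show once jobs need dependencies, retries or cross-machine coordination, which is the gap the webinar addresses.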
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide