What is your patch management strategy?

Conficker seems to be the theme of the week. So, with the crisis abated for the moment, I thought this would be a good opportunity to discuss an issue near and dear to my heart – patch management.

Conficker will probably go down in history as one of the great worms. Not because of the damage it did to systems, but because of the cycles that computers and people spent making sure they were not about to be, or had not already been, infected. By that measure, Conficker was very effective. If you want to put it on the list of April Fools' jokes as well, go right ahead.

What led to the chaos in many companies and some federal and state agencies was a simple lack of a cohesive patch management strategy. Conficker penetrated Windows by exploiting a number of known, unpatched holes in an operating system that has millions of lines of code, many of which have not been fully reviewed. But this does not mean that Open Source or other operating systems are not vulnerable. As has been noted, Linux and other Open Source code has fewer errors per line of code, but a simple misunderstanding in coding or a slipped comma can still lead to any number of holes in any operating system, which is why we patch code when errors are discovered. Yet a number of companies and people still do not apply these patches.

As someone who used to manage patches for a rather large ERP system, I can understand, to a limited degree, the argument that we cannot apply any patch until we understand what damage it can do. This is a fair position to have, but most of the executives and others who take it fail to realize that without a fully staffed and funded test and development environment, it is impossible to test and evaluate every patch that comes down the line, commercial or open source. Even in boom times, this was not a realistic position to hold – especially for what I would term routine patches.

The problems begin to crop up when patches are not deployed in a timely manner. This includes, but is not limited to, the critical security patches. As a consumer, it is my responsibility to keep on top of patches and the impact they would have on my applications. When you include databases, application servers and OS patches, the task of managing patches can quickly become overwhelming, and you can fall behind very easily if you are not patching in a routine, timely manner.
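For illustration, here is a minimal sketch of the kind of routine check I have in mind: ask the package manager how many updates are pending, so you can see how far behind a host has fallen. The yum and apt commands are the standard ones; everything else in the script is just an example, not a recommendation of any particular tool.

    #!/usr/bin/env python
    # Minimal sketch: report how many package updates are pending on this host.
    # Assumes a yum-based (RHEL/CentOS/Fedora) or apt-based (Debian/Ubuntu) system.
    import subprocess

    def pending_yum():
        # 'yum check-update' exits with status 100 and lists packages when
        # updates are available; counting non-empty lines gives a rough count.
        proc = subprocess.run(["yum", "-q", "check-update"],
                              capture_output=True, text=True)
        if proc.returncode != 100:
            return 0
        return len([l for l in proc.stdout.splitlines() if l.strip()])

    def pending_apt():
        # A simulated upgrade ('-s') prints one "Inst ..." line per package.
        proc = subprocess.run(["apt-get", "-s", "upgrade"],
                              capture_output=True, text=True)
        return sum(1 for l in proc.stdout.splitlines() if l.startswith("Inst "))

    if __name__ == "__main__":
        for name, check in (("yum", pending_yum), ("apt", pending_apt)):
            try:
                print("%s: %d updates pending" % (name, check()))
            except FileNotFoundError:
                pass  # that package manager is not installed on this host

Run something like this across your fleet on a schedule and the hosts that are quietly falling behind become obvious long before the next worm does the pointing for you.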

When we look at Conficker, we find that not only had a patch been released to close the hole, but the anti-malware vendors had also released signatures to detect and clean the infection if discovered. And while there was some last-minute updating going on for a new variant discovered forty-eight hours before C-day, there is almost no excuse for so many people spending so many cycles running around and patching their systems.

So, if you have not applied the latest security patches to your systems, go and do it. Or find out what the holdup is, but do get it done.

______________________

David Lane, KG4GIY is a member of Linux Journal's Editorial Advisory Panel and the Control Op for Linux Journal's Virtual Ham Shack

Comments


patch management

markvcam

Patch management and deployment are two important parts of any company-wide strategy and go hand in hand.

Check out this article from GFI outlining the importance of patch management and deployment for SMBs.

http://www.gfi.com/lannetscan/patch-management.htm

Curious

Stewart

I'm curious to hear what others do in production networks as well. This is easy in a Microsoft world where patches come out once per month (with few exceptions). In a production Linux environment, where you are running RHEL3-5, CentOS3-5, and smatterings of Fedora systems, it is more complex with patches coming out all the time.

I run a group of 5 people that manages around 140 Linux systems from Fedora to RHEL and CentOS.

You have to take a best-practices approach on your systems to standardize your builds and minimize exposed services, or you'll spend all of your free time researching patches.

We patch dev/test stuff weekly and do a monthly release to the production systems, unless something critical and exposed to the public comes up. I don't think we've ever had a patch cause issues, other than kernel patches and flaky vendor drivers for some Fibre Channel cards needing to be recompiled.
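For what it's worth, that cadence boils down to a few lines of logic. The sketch below is only an illustration; the host names, roles, days, and the critical-exposure flag are placeholders for whatever inventory you actually keep.

    import datetime

    def patch_today(role, critical_exposure=False, today=None):
        """Decide whether a host should be patched in today's run."""
        today = today or datetime.date.today()
        if critical_exposure:
            return True                                       # patch out of cycle
        if role in ("dev", "test"):
            return today.weekday() == 2                       # weekly, e.g. Wednesdays
        if role == "prod":
            return today.day <= 7 and today.weekday() == 5    # monthly, first Saturday
        return False

    hosts = {"build01": "dev", "web01": "prod", "db01": "prod"}
    for host, role in sorted(hosts.items()):
        print(host, "->", "patch" if patch_today(role) else "skip")

The point is that the schedule is written down and enforced by something other than memory, so the only judgment call left is whether an issue is critical enough to break the cycle.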

--

Brian11

You might lump Conficker in with other April 1 jokes only if you listen to the media about it. April 1 was the activation date, not the date it was going to destroy the Internet. When Conficker activated on April 1, it did exactly what was expected of it -- it woke up and downloaded something. That's it, and that's all that was expected. In that regard, it was also successful.

But yes, every company must have a patch strategy, and it's always good to be up to date.

@Anonymous:
I think you're probably a Linux desktop user. In the server space, one cannot just update everything, even though it's easy. And frankly, Linux servers are the issue here. The very small number of Linux desktops prevents them from being a real target.

In the server space, when your business relies on the servers being up, you most certainly need to review every patch, or at least every package that's updated, so you know what it's doing. This is not a "Windows" mindset, and in fact almost all Windows patches in the past few years have not been much to worry about, as MS has gotten a really good handle on how to roll them out now.

Um no, brian11, don't assume

Anonymous

I run a department with over 300 Linux, Mac, and Windows desktop users, anchored by several Linux servers hosted on three machines: file and printer sharing with automatic download of printer drivers to Windows clients (Linux and Mac use CUPS, so there is no need for downloads from the server), authentication, name services, firewalls, routers, wireless access points with RADIUS authentication and WPA2, some terminal services, and automatic provisioning of Linux clients over the network. No virtualization, because for Linux it's pointless; it runs multiple services just fine. We use SELinux on the servers to prevent privilege escalation in the unlikely event of a successful intrusion.

There is one Windows 2003 server to provide a licensing server (which is ridiculous and annoying) and other Windows-only services for the Windows XP and Vista clients. That one Windows server and its clients are big fat pains and consume a disproportionate amount of time and hardware resources. We absolutely have to test updates before deploying them, because they break something frequently enough that testing is more cost-effective than trusting.

The Linux servers all run RHEL. I think we could move to CentOS and save the support costs, because we have first-rate Linux admins (me and one other person). But the few times we needed to call Red Hat they were on the ball and helped us quickly, so we'll probably stay with them. Before we moved to RHEL we used Debian stable. Never a worry about routine, automated updates, never a worry about racing to download the latest patches before the malware gets there first. The only Linux updates that I monitor are kernel updates, because we've had some problems with kernel updates breaking some device support, mainly wireless network interfaces.

That old nonsense that fewer Linux desktops are why they are not pummeled by malware shows that you are misinformed. Desktop or server, Linux is Linux, and its share of every market segment except the desktop (which I think is very underestimated, since none of the mainstream analysts or research firms ever bother to include free-of-cost, non-commercial deployments) is significant, and in several arenas is more than Windows. Windows is the malware target of choice because it has everything a malware author wants: it is easy to exploit, impossible to secure, and popular. It is simply not possible to secure Windows, while Linux and all Unix-type operating systems are very securable, and you don't need to waste tons of money, time, and hardware on third-party crud that is only partially effective and always reactive.

why all the drama?

Anonymous

Thanks David. I don't understand all the drama that we see in so much of the (poor) reporting on these issues over keeping Linux systems up to date, because it is so easy and reliable. All of the major distributions come with automatic update notifications and simple configuration options: update automatically without user intervention, notify and wait for the user to click 'OK', or let the user review proposed updates first and pick and choose which ones to apply.

Both Debian and Red Hat, and their many offspring, come with tools for caching downloaded updates and packages locally, which speeds up mass updates considerably.

Like many Linux users, I've been applying automatic updates for years (back in the olden days we had to make cron jobs; we didn't have all this fancy new stuff!) without feeling like I had to test and review every single patch. That's just plain nuts. It's not worth the time and effort, because problems or failures are rare. That's a Windows mindset, and with Windows you do have to test every patch and update because failures and new problems are routine.
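For anyone still doing it the cron way, a wrapper like the sketch below is all it takes. The log path and the choice of yum over apt are just assumptions for the example; tools like yum-cron and unattended-upgrades do the same job with less effort if your distribution ships them.

    #!/usr/bin/env python
    # Sketch of a cron-driven update job: run the distribution's updater
    # non-interactively and append the outcome to a log file. The command
    # and log path are examples only.
    import datetime
    import subprocess

    LOG = "/var/log/auto-update.log"
    CMD = ["yum", "-y", "update"]          # or ["apt-get", "-y", "upgrade"]

    with open(LOG, "a") as log:
        log.write("=== %s ===\n" % datetime.datetime.now().isoformat())
        log.flush()                        # flush before the child writes to the same fd
        result = subprocess.run(CMD, stdout=log, stderr=subprocess.STDOUT)
        log.write("exit status: %d\n" % result.returncode)

A crontab entry such as "30 3 * * * /usr/local/bin/auto-update.py" runs it nightly; the only time I look at the log closely is when a kernel is in the update set.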

It's disgraceful that in this year of 2009 Windows can still become compromised from email attachments and visiting Web sites. Conficker targets XP and Vista -- so much for MS's claims of progress :P
