Paranoid Penguin - The Future of Linux Security
Did you know that I've been writing this column for the better part of five years? And what an action-packed five years they've been! In that time, we've seen some of Linux's biggest former competitors embrace it, and Linux has made significant inroads as a desktop platform.
In the realm of Linux security, there also have been remarkable advances. Linux's firewall functionality now is so mature that it's the basis for a number of embedded firewall appliances, not to mention countless non-security-related devices as well. Linux supports a staggering variety of security tools, making it a favorite among security auditors and consultants. In addition, Linux has formed the basis for several ultra-secure role-based access control (RBAC)-based operating systems, most notably the NSA's SELinux.
But what about the future of Linux security? I've written a lot about present and past Linux security issues but never about the future, aside from my interview with the forward-looking Richard Thieme. This month, I'd like to indulge in a little speculating and editorializing and talk about where I think Linux security will go and where I think it ought to go.
The revelation a lot of people have been having about Linux security lately is that typical Linux systems are not much more secure than typical Microsoft Windows systems. Before the e-mail flames begin, let me explain this statement. First, I do happen to think that Linux is more securable than Windows, and I've said so repeatedly in this very column over the years. Users simply have more control over their Linux systems' behavior than they do over an equivalent Windows system.
The problem is that Linux users, like Windows users, tend to focus most of their energy on getting their systems to do what they need them to do, and they place too much trust in their systems' built-in or default security settings. Then, when the inevitable software bugs surface, those bugs' effects tend to be more extensive than they would have been had greater precautions been taken.
For example, if I run BIND v9 for name services, it takes some work and some research to get things working. It takes still more work to get BIND running in a chroot jail, so that the named process can see and use only a subset of the server's filesystem. Therefore, many if not most BIND users tend not to run BIND in a chroot jail. When a BIND vulnerability surfaces in the wild, the majority of BIND users probably experience more pain than if they'd done the chroot thing. It's probably the same amount of pain they would experience if they had run a Microsoft name server with fewer security features than BIND has.
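To make the chroot option concrete, here is a sketch of the skeleton such a jail needs. The paths and the unprivileged user name ("named") are typical but vary by distribution, so treat this as an outline rather than a drop-in script:

```shell
#!/bin/sh
# Build a minimal filesystem for named to live in once jailed.
# In production this would be somewhere like /var/named/chroot.
CHROOT=${CHROOT:-/tmp/named-chroot}

mkdir -p "$CHROOT/etc" "$CHROOT/var/named" "$CHROOT/var/run"

# named.conf and the zone files must live inside the jail; here we
# create a placeholder where you would copy your real configuration.
: > "$CHROOT/etc/named.conf"

# As root, you would then give the unprivileged user ownership of the
# writable directories and start named chrooted (-t) as that user (-u):
#
#   chown -R named:named "$CHROOT/var"
#   named -u named -t "$CHROOT"
```

Once named is started this way, a compromise of the named process exposes only the jail's contents, not the rest of the server's filesystem.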
All of this is simply to say that many of Linux's security features and capabilities are not taken advantage of by its users. The end result is, at least according to friends of mine who regularly do professional penetration testing, that your average Red Hat Enterprise system isn't significantly harder to break into than your average Windows 2003 Server system.
This is unfortunate and perhaps surprising. Given the complete transparency of its code base, Linux still seems to be prone to the same kinds of software bugs, in roughly the same quantity and frequency, as Windows. But if you think about it, why wouldn't this be so? As with Windows, Linux represents an amazingly complex mass of code produced by hundreds of different people. The more code there is, the more bugs there may be, right?
I recently was interviewed by SearchSecurity.com for an article about a Microsoft-funded study conducted by Security Innovation, Inc. The study concluded that Windows is more secure than Linux, a conclusion based mainly on frequency of security bugs and mean time to issue patches. I believe I correctly criticized the study for looking only at these easily quantifiable aspects of security and not taking into consideration Linux's other security advantages, such as customizability and greater choice of software packages. In other words, I felt the study had the most relevance when comparing default installation scenarios, irrespective of each OS' potential for being secured by its users.
But the more I think about it, the more I worry that perhaps a platform's security potential doesn't count unless most systems running that platform actually reach that potential. This isn't strictly a function of end-user behavior; I'm not trying to impugn system administrators. As I elaborate later, I think Linux's developers and distributors must continue to figure out ways to make security features more ubiquitous, transparent and easy to configure and use. By the way, because I'm comparing Linux with Windows, in fairness I should point out that Windows too has many security features that its users often do not take advantage of.
Okay, Linux and Windows both are much less secure by default than they could be, and both are subject to an unwinnable race between software bugs and security patches. What else are we up against?
Alas, both operating systems use a rather primitive discretionary access control model in which entire categories of security settings and behaviors are optional. In this model, one superuser account—root in Linux, Administrator in Windows—has god-like power over the entire system, including other users' files. In both OSes, group memberships can be used to create different levels of access, say, to delegate various root powers. In practice, however, on most systems you have to be logged on as the superuser or temporarily become that user in order to do anything important.
As a result, gaining complete control over any Linux or Windows system is a matter of compromising any process running with superuser privileges. But wait, you say, I've configured my important dæmons to run as unprivileged users; bugs in those dæmons can't lead to total compromise, can they? No, not directly, but bugs in other software may make it possible for a non-root process to escalate its privileges. For example, suppose you've got a Web server running Apache, and one day an attacker manages to exploit an unpatched Apache buffer overflow vulnerability that results in the attacker getting a shell session on your server. At this point, the attacker is running as www, because that's the user Apache is running as. But suppose further that this system also has an unpatched kernel vulnerability that involves local privilege escalation.
You, the system administrator, may even know about this vulnerability but have opted not to patch it, because after all, it's strictly a local vulnerability, and nobody besides you has a shell account on this system, and who wants to have to reboot after patching the kernel? But now a remote attacker does have local shell access, and if she successfully exploits this kernel vulnerability, she's root! This all-too-common scenario illustrates that bugs are bad enough, but they're even worse when combined with a root-takes-all security model.
This, in a verbose nutshell, is the present state of Linux security. Securing Linux requires us to expend considerable effort to take full advantage of sometimes-complicated security features that usually are not enabled by default, to keep absolutely current on all security patches, and to do all of this within the limitations of Linux's simple security model. But we're in good company: most commonly used contemporary operating systems have exactly the same limitations and challenges.
I've alluded to the fact that access controls or file permissions on Linux, UNIX in general and Windows are discretionary, and that this is a weak security model. Well, what about SELinux? Doesn't that use RBACs and type enforcement (TE), both of which are examples of mandatory access controls? Yes, indeed, it does. But I'm afraid that this probably isn't the future of Linux security, for the same reasons that SELinux isn't a huge part of present Linux security.
RBACs restrict users' behavior and access to system resources based on carefully defined roles that are analogous to but more far-reaching than the conventional UNIX groups mechanism. Similarly, type enforcement restricts processes' activities based on their predefined domains of operation. The net effect of RBAC and TE is to create segregated silos (my term) in which users and processes operate, with strictly limited interaction being permitted between silos.
This is an elegant and effective security model. However, for most people, RBAC, TE and other mandatory access controls are too complicated and involve too much administrative overhead. This, in many people's view, dooms SELinux and similar operating systems to the realm of niche solutions: OSes that are useful to people with specific needs and capabilities but not destined for widespread adoption. Despite admiring SELinux's security architecture and being a fan of the concept of RBAC in general, I do not think that mandatory access controls by themselves are likely to revolutionize Linux security.
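A small fragment of type-enforcement policy gives a feel for both the power and the overhead involved. The type names below follow the conventions of stock SELinux policies, but treat this as an illustrative sketch, not a drop-in rule set:

```
# Let the Web server domain read (but not write) static Web content:
allow httpd_t httpd_sys_content_t:file { read getattr };

# Let it bind to the standard HTTP port, and nothing else:
allow httpd_t http_port_t:tcp_socket name_bind;
```

Everything not explicitly allowed is denied, which is exactly the silo effect described above; the catch is that a real policy consists of thousands of such rules, and writing or auditing them is what makes the administrative overhead so daunting.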
If RBAC and TE do in fact prove too unwieldy to compartmentalize security breaches at the OS level, hypervisors and virtual machines (VMs) may achieve this at a higher level. We're already familiar with virtual machines in two different contexts: runtime virtual environments, such as those used by Java programs, and virtual platforms, such as VMware, plex86 and VirtualPC, that allow you to run entire operating systems in a virtualized hardware environment.
The Java Virtual Machine was designed with particular security features, most notably the Java sandbox. In general, though, Java security comes from the fact that Java applets run isolated from raw or real system resources; everything is mediated by the Java Virtual Machine. Besides being a good security model, it's also relatively simple to use safely, both for programmers and end users. Java also is, for many reasons, already ubiquitous.
Virtual platforms take this concept a step further by mediating not only individual programs but the operating systems on which they run. Security architecture in this scenario, however, isn't as mature as with the Java Virtual Machine. For the most part, security is left to the guest operating systems running in the virtual environment. A SUSE Linux virtual machine running on VMware, therefore, is no more or less secure than a real SUSE system running on its own hardware.
Hypervisor technology addresses the need to isolate virtual machines running on the same hardware from one another, restrict their interactions and prevent a security breach on one virtual machine from affecting others. IBM has created a security architecture called sHype for hypervisors. An open-source hypervisor/virtual-machine project called Xen also is available.
Although the driving purpose of a hypervisor is to prevent any one virtual machine from interfering with other virtual machines running on the same hardware—for example, by monopolizing shared hardware resources—the idea of having some sort of intelligence managing systems at this level is powerful. It may even have the potential to overshadow or, at the very least, significantly augment traditional intrusion detection systems (IDSes) as a means of detecting and containing system compromises.
Mandatory access controls and hypervisors/virtual machines aren't mutually exclusive. On the one hand, I am of the opinion, strongly influenced by my friend and fellow security analyst Tony Stieber, that hypervisors have much greater potential to shape the future of Linux security than do MACs. But on the other hand, the two can be used together. Imagine a large, powerful server system running several virtual machines controlled by a hypervisor. One VM could be running a general-purpose OS, such as Linux, serving as a Web server. Another VM, serving as a database for sensitive information, could run a MAC-based OS such as SELinux. Both VMs would benefit from security controls enforced by the hypervisor, with SELinux providing extra levels of security of its own.
One additional technology, like MACs and hypervisors, already exists today but could have a much bigger impact in the future: the anomaly-based intrusion detection system. The idea behind anomaly-based IDS is simple: create a baseline of normal network or system activity, and send an alert any time unexpected or anomalous behavior is detected.
If the idea is simple and the technology already exists, why isn't this approach commonly used? Because it isn't nearly as mature or easy to use as signature-matching. We're all familiar with signature-based IDSes; they maintain databases of attack signatures, against which observed network packets or series of packets are compared. If a given packet matches one in the attack database, the packet is judged to be part of an attack, and an alert is sent.
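A single Snort-style rule makes the signature-matching approach concrete. The signature content and SID here are made up for illustration; the point is simply that detection hinges on matching a known byte pattern:

```
alert tcp any any -> $HOME_NET 80 \
    (msg:"Hypothetical exploit attempt"; content:"/bin/sh"; \
     sid:1000001; rev:1;)
```

A packet bound for port 80 that contains the string "/bin/sh" triggers an alert; a functionally identical attack with a slightly different payload sails through untouched.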
The strengths of this approach are that it's easy to use and typically produces few false positives, or false alarms. The fatal weakness of signature-based systems is that if an attack is too new or too complicated to have a corresponding signature in your IDS' signature database, it goes undetected.
With anomaly-based IDS, in contrast, any new attack that differs sufficiently from normal behavior is detected. The trade-off is that the IDS administrator must train and periodically re-train the system in order to create the normal-behavior baseline. This results in a period of frequent false positives until the baseline has been fine-tuned.
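The core of the baseline idea can be sketched in a few lines of shell and awk. Everything here is hypothetical: the metric (connections per minute), the training samples and the three-standard-deviations threshold are stand-ins for whatever a real product would actually model:

```shell
#!/bin/sh
# Train: record "normal" samples of some metric -- say, inbound
# connections per minute -- then compute their mean and std deviation.
BASELINE=$(mktemp)
printf '10\n12\n11\n9\n13\n10\n' > "$BASELINE"

read MEAN SD <<EOF
$(awk '{ s += $1; ss += $1*$1 }
       END { m = s/NR; printf "%f %f", m, sqrt(ss/NR - m*m) }' "$BASELINE")
EOF

# Detect: flag any new observation more than three standard
# deviations above the trained mean as anomalous.
check() {
    awk -v m="$MEAN" -v sd="$SD" -v x="$1" 'BEGIN {
        verdict = "normal"
        if (x > m + 3*sd) verdict = "ANOMALOUS"
        print x, verdict
    }'
}

check 11    # close to the trained baseline
check 500   # e.g. a worm suddenly opening hundreds of connections
rm -f "$BASELINE"
```

Note that no signature is involved: the worm-like spike is flagged because it deviates from learned behavior, which is also why a poorly trained baseline produces the false positives described above.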
I attended a lecture by Marcus Ranum in 1999 or so in which he described anomaly-based systems as the future of IDS. Obviously, we're not there yet. Such products are available from vendors such as Lancope and Arbor Networks. But I remain hopeful that someone will figure out how to do this sort of thing in ways that are cheaper and easier to use than current systems. Potentially, this could lead to a sort of network hypervisor that lends the same sort of intelligence to networks, whether composed of virtual or real machines, that hypervisors lend to virtual platforms.
By the way, virus scanners need anomaly detection technology, and can benefit from it, as much as IDSes do. This point is amply illustrated by the fact that the vast majority of organizations using modern virus scanners, which rely almost exclusively on virus-signature matching, nonetheless suffer major virus/trojan/worm outbreaks. Current signature-based antivirus tools clearly are not effective enough.
So those are my thoughts on the future of Linux security. In the meantime, keep on using the techniques this column has focused on over the years: firewalls, virus scanners, automatic patch/update tools, VPNs and application-specific security controls such as chroot jails and audit trails.
With that, I bid you farewell, not only for this month but indefinitely. It's time for me to focus on other things for at least a little while and allow fresh voices to take over the Paranoid Penguin. I'm continuing in my role as Security Editor and in that capacity will keep on doing my bit to help Linux Journal bring you outstanding security content. I also will try to contribute an article now and then myself, on an ad hoc basis. But the article you are reading now is my last as exclusive author of this column.
Thanks to all of you for five years of support, encouragement and edification—I've never made a mistake in this column that wasn't noticed and corrected by someone out there and always to my benefit. It's been a great five years, and I'm grateful to this terrific magazine's staff and readers alike for all you've done for me!
Resources for this article: /article/8329.
Mick Bauer, CISSP, is Linux Journal's security editor and an IS security consultant in Minneapolis, Minnesota. O'Reilly & Associates recently released the second edition of his book Linux Server Security (January 2005). Mick also composes industrial polka music but has the good taste seldom to perform it.