Paranoid Penguin - Linux Security Challenges 2010

Security challenges and worries for 2010: we live in interesting times indeed!
Malware

Malware has been with us a long time, and some of the things that scare us now, like polymorphic code that alters itself to thwart signature-based antivirus methods, actually have been around a while. What's changed recently is the emergence of “targeted malware”: worms, trojans and viruses designed to attack specific parts of specific target organizations.

Targeted malware is probably the scariest new threat that we as security professionals and system/network administrators face. By definition, it's always “zero-day”: you can never hope your antivirus software provider has signatures for code that not only has never been released into the wild, but that won't necessarily even function against anyone's network and systems but yours. Targeted malware is almost never written from scratch, however. It's frequently generated using sophisticated, slick “malware construction” software written by the highly skilled, highly paid malware authors of the underworld.

But although you might think there's some potential for detecting common characteristics among hostile applications that target different organizations but originate from the same development tools, these tools are specifically designed to write code that evades detection. At a recent security conference, a forensics specialist whose presentation I attended commented that it's not uncommon for his team to fail to fully isolate the source of attacker activity on a compromised network beyond identifying infected systems. Much of the code he encounters nowadays is too deeply embedded in other applications, DLLs and even the kernel itself to be identified and isolated easily.

Equally scary is how it's propagated. You may think that firewalls, application proxies and other defenses on your network's perimeter should minimize the chance for worms to penetrate your internal systems in the first place. You may even be correct. But frequently, targeted malware is installed directly onto one or more internal systems at a target site by either a corrupted insider or a crook who's obtained a job at the target organization for the specific purpose of placing the malware.

It's already hard enough to ensure proper physical security, OS-level access controls and application-level authorization controls for systems that handle or store sensitive data. But to do so uniformly across all systems or local networks that merely interact with such systems, and may have been compromised by malware, is a much bigger problem.

Furthermore, even if the back end is well secured, what about targeted malware that harvests data from end users? Your customer service representatives who handle customer account information may be perfectly trustworthy, but what if their systems become infested with keystroke loggers that transmit customer information back to some criminal's servers, over an SSL-encrypted network stream that's nearly indistinguishable from ordinary Web surfing? It's easy to imagine scenarios in which data handled by your organization's end users might be harvested by bad guys, if they were able to achieve even a small foothold on even one system in your internal network.

Is the targeted malware threat unstoppable? To some extent, yes. In practical terms, it's a particular type of insider attack, and insider attacks can never be prevented completely. The good news is we already know how to manage insider threats: background checks, system/application/employee monitoring, granular access controls at all levels, good physical security and so forth. The more broadly and consistently we apply these varied, layered controls, the less likely it will be that even a given targeted attack can succeed, and the more limited the scope of damage it is likely to cause.

Like so much else in security, it's less a game of preventing attacks than of increasing the cost and effort required for an attack to succeed.

Virtualization

And now we come to virtualization, which, both on its own and in tandem with cloud computing, is the focus of so much buzz and hype. Virtualization has unquestionably altered the way we think about computers. By making the notion of “computing hardware” almost completely abstract relative to operating systems and applications, virtualization can free us from certain types of physical and even geographical limitations, or more accurately, it can shift those limitations to a different part of the resource planning process.

Perhaps overly idealistically, I used to think virtualization could free us from the “winner take all” phenomenon in operating system security. On any system under attack, attackers frequently need to find only one vulnerability in one application to compromise the entire system completely. But what if the most vulnerable application on a given server is the only network listener on that system?

Suppose I need to run an SMTP relay using Sendmail, and I would normally also run a Network Time Protocol (NTP) dæmon, the Secure Shell dæmon (sshd) and RealVNC on that same system. That's four different attack vectors on one system. But what if I run Sendmail in its own virtual machine on that host, allowing access to it from the outside world, and have the three other dæmons running on the underlying host accept connections only from the IP address of some internal access point?

Sure, I could achieve something similar without virtualization by using TCP Wrappers or a local iptables policy. But if all the dæmons run on the same system and attackers gain only a partial foothold via Sendmail, perhaps resulting in nonroot remote access, they may be able to attack one or more of the three other dæmons in an attempt to escalate their privileges to root. But if those dæmons are running on the Sendmail virtual machine's host system and are configured to reject connection attempts from the Sendmail virtual machine, that second attack will fail.
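
For illustration, here's a minimal Python sketch of that host-side lockdown, assuming a hypothetical internal access point at 10.0.0.5 and the standard ports for sshd, NTP and RealVNC (none of those specifics come from the scenario above). It simply shells out to iptables, so it must run as root, and it is not a complete firewall policy:

    #!/usr/bin/env python3
    # Minimal sketch, not a production firewall: permit only an internal
    # management address to reach the host's own daemons, and drop all
    # other traffic to them, including anything from the Sendmail guest.
    # The address and port numbers below are illustrative assumptions.
    import subprocess

    MGMT_IP = "10.0.0.5"              # hypothetical internal access point
    HOST_SERVICES = [("tcp", "22"),   # sshd
                     ("udp", "123"),  # NTP
                     ("tcp", "5900")] # RealVNC

    def iptables(*args):
        """Append one rule to the INPUT chain, raising if iptables fails."""
        subprocess.run(["iptables", "-A", "INPUT", *args], check=True)

    for proto, port in HOST_SERVICES:
        # Accept connections to this daemon from the internal access point only...
        iptables("-p", proto, "--dport", port, "-s", MGMT_IP, "-j", "ACCEPT")
        # ...then drop everything else destined for the same port.
        iptables("-p", proto, "--dport", port, "-j", "DROP")

A handful of iptables commands typed at a shell, or TCP Wrappers entries for wrapper-aware dæmons such as sshd, would accomplish the same thing; the point is simply that the host's own dæmons answer neither the outside world nor the Sendmail guest.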

Unless, that is, our assumptions about virtualization don't hold. This brings me to the dark underbelly of virtualization, which in our headlong rush to maximize hardware resource utilization, I fear may not be under close enough inspection.

We assume that one virtual machine can't see or gain access to the resources (disk space, memory and so on) used by other virtual machines running on the same host. Virtual machines are supposed to be isolated by, among other things, a hypervisor or monitor program. We also assume that it isn't feasible or possible for any userspace application running on a guest virtual machine to speak directly to any process or resource on the underlying host.

If you write hypervisor code, there are strong incentives for you to maintain these assumptions and write a secure hypervisor. Pretty much anything that can subvert hypervisor security will have a negative impact on system performance, availability and overall reliability. For example, a bug that allows one virtual machine to access another's memory, while potentially calamitous if discovered by an attacker, is at least as likely to result in one virtual machine's impairing another's performance by unintentionally overwriting its memory.

But recent history has shown that both theoretical and demonstrable attacks are possible against popular system virtualization environments, such as VMware (see the link to Michael Kemp's presentation, in Resources).

Does this mean we shouldn't use virtualization? Of course not. This is a powerful and useful technology. But it's also very new, at least in many contexts in which we're deploying it nowadays, and until hypervisor security is better understood and more mature, I do think we should be careful about which virtual machines we run on the same host. It seems prudent to me to colocate only systems handling similar data and representing similar levels of risk (for example, Internet-reachability) on the same host system.

In other words, we probably shouldn't rely on hypervisors to protect virtual machines from each other any more than we have to.
