Paranoid Penguin: Taking a Risk-Based Approach to Linux Security

Risk is inevitable. Be pessimistic about individual programs failing, make plans for handling and containing problems, and you'll keep your system as a whole secure.
Defense Scenario One: Firewall Policies

Now, let's begin matching these threats with defenses. This is where the risk-based approach becomes really important.

If you take an absolutist view of security, defense is simple. You choose software not on the best combination of functionality, supportability and security, but solely on security. Because security is your main selection criterion, all you need to do is keep that software patched, and all is well.

You probably also configure your firewall to trust nothing from the outside and to trust everything originating from the inside, because, of course, all outsiders are suspect and all insiders are trustworthy. In fact, software patches and firewall rules are so important in this view that practically nothing else matters.

And indeed, software patches and firewalls are important. But the degree to which we depend on patches and the way we use firewalls are somewhat different if we take the trouble to think about the real risks they're meant to address.

Consider the scenario I've sketched out in Figure 2. A firewall protects a DMZ network from the outside world, and it protects the internal network from both the outside world and the DMZ.

Figure 2. Simple Firewall Scenario

The firewall rules, shown in Figure 2 by the dotted lines, might look like this:

  1. Allow all Internal hosts to reach the Internet via any port/protocol.

  2. Allow all DMZ hosts to reach the Internet via any port/protocol.

  3. Allow all Internet hosts to reach the DMZ via TCP port 80 (HTTP).

  4. Allow the DMZ Web server to reach Internal hosts via TCP ports 1433 and 2000–65514.

On the face of it, this might seem reasonable enough. Internal users need to do all sorts of things on the Internet, so restricting that access is a hassle. The DMZ needs to do DNS queries for its logs, so why not give it outbound Internet access too? And there's a back-end application the DMZed Web server needs to access on the internal network that involves a database query on TCP 1433 plus a randomly allocated high port that falls within some finite range nobody's managed to document. So, the easiest thing to do is open up all TCP ports higher than 1999.
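Expressed as iptables rules, this permissive policy might look something like the minimal sketch below. The interface names and addresses are assumptions made purely for illustration: eth0 faces the Internet, eth1 the DMZ, eth2 the internal network, and 192.0.2.10 stands in for the DMZ Web server.

    # Assumed layout (hypothetical): eth0 = Internet, eth1 = DMZ,
    # eth2 = internal; 192.0.2.10 = DMZ Web server
    iptables -P FORWARD DROP    # deny anything not explicitly allowed
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Rule 1: internal hosts may reach the Internet on any port/protocol
    iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
    # Rule 2: DMZ hosts may reach the Internet on any port/protocol
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    # Rule 3: Internet hosts may reach the DMZ Web server on TCP 80
    iptables -A FORWARD -i eth0 -o eth1 -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT
    # Rule 4: the Web server may reach internal hosts on TCP 1433 and 2000-65514
    iptables -A FORWARD -i eth1 -o eth2 -s 192.0.2.10 -p tcp \
        -m multiport --dports 1433,2000:65514 -j ACCEPT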

But let's consider three plausible risks:

  1. Internet-based attacker compromises Web server and uses it to attack other systems on the Internet.

  2. Worm infects internal system via an RPC vulnerability, and the infected system begins scanning large swaths of the Internet for other vulnerable systems.

  3. Worm infects an internal system and starts a backdoor listener on TCP 6666. Attacker compromises the Web server, scans through the firewall, detects the well-known worm's listener and connects to the internal system.

In the first risk scenario, we've got an obvious legal exposure. If our Web server is compromised, and our firewall isn't configured to restrict its access to the outside world, we may be liable if the Web server is used to attack other systems. Restricting the Web server's outbound access only to necessary services and destinations mitigates this risk. In practice, a typical DMZed Web server should require few if any data flows to the outside world—its job is responding to HTTP queries from the Internet, not initiating Internet transactions of its own.
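What might that restriction look like? Here's a minimal sketch that replaces the blanket rule 2 accept for the Web server, assuming (hypothetically) that DNS queries to a single resolver at 192.0.2.53 are the only outbound traffic it legitimately initiates:

    # Replies to inbound HTTP sessions are already covered by the state rule
    # Allow only DNS queries from the Web server to its designated resolver
    iptables -A FORWARD -s 192.0.2.10 -d 192.0.2.53 -p udp --dport 53 -j ACCEPT
    # Log, then drop, anything else the Web server initiates toward the Internet
    iptables -A FORWARD -i eth1 -o eth0 -s 192.0.2.10 -j LOG --log-prefix "DMZ egress denied: "
    iptables -A FORWARD -i eth1 -o eth0 -s 192.0.2.10 -j DROP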

In the second scenario, we have a similar exposure, though the network performance ramifications are probably greater than the legal ramifications (all that scanning traffic can clog our Internet uplink). Again, a more restrictive firewall policy around outbound access trivially mitigates this risk.
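Sketching that policy for the internal network means replacing rule 1's blanket accept with an allowlist. The port list here is purely illustrative, assuming internal users need only Web, mail and DNS:

    # Egress allowlist for internal hosts (ports illustrative)
    iptables -A FORWARD -i eth2 -o eth0 -p tcp -m multiport --dports 80,443,25 -j ACCEPT
    iptables -A FORWARD -i eth2 -o eth0 -p udp --dport 53 -j ACCEPT
    # A worm scanning the Internet on, say, TCP 135 or 445 is now logged and dropped
    iptables -A FORWARD -i eth2 -o eth0 -j LOG --log-prefix "internal egress denied: "
    iptables -A FORWARD -i eth2 -o eth0 -j DROP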

The third scenario may seem a little more outlandish than the others—what are the chances of a worm infection on the inside and a Web server compromise in the DMZ both happening at once? Actually, they don't have to occur simultaneously. If the worm sets up its backdoor listener on TCP 6666 but then goes dormant, it may not be detected for some time. In other words, the Web server's compromise doesn't need to occur on the same day, or even in the same month, as the worm infestation, so long as the infected system isn't disinfected in the meantime. As with the other two scenarios, a more restrictive firewall policy mitigates this risk and minimizes the chance of the internal worm infestation being exploited by outsiders.
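In this case, the fix is tightening rule 4. Here's a sketch, under the assumption that the back-end application's dynamic port range can at least be narrowed to something documented (the database server address 10.0.0.20 and the 2000-2100 range are hypothetical stand-ins):

    # Permit only the documented database flow and a narrowed dynamic range
    iptables -A FORWARD -i eth1 -o eth2 -s 192.0.2.10 -d 10.0.0.20 \
        -p tcp -m multiport --dports 1433,2000:2100 -j ACCEPT
    # Anything else from the DMZ to the internal network, including a connection
    # to a worm's backdoor listener on TCP 6666, is logged and dropped
    iptables -A FORWARD -i eth1 -o eth2 -j LOG --log-prefix "DMZ-to-internal denied: "
    iptables -A FORWARD -i eth1 -o eth2 -j DROP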

Besides being mitigated by more restrictive rules, these three risks have another important commonality. You don't need to predict any of them accurately to mitigate them. Rather, it's enough to think “what if my inbound firewall rules fail to prevent some worm or virus from getting in, and unexpected types of outbound access are attempted?”

I can't stress strongly enough that it's important not to focus exclusively on attack prevention, which is what inbound firewall rules do. It's equally important to think about what might happen if your preventative measures fail. In information security, pessimism is constructive, not defeatist.

I also hope it's clear by now that my point isn't that firewall rules are the answer to all your Linux risks. The point is that effective firewall rules depend on you considering not only known threats, but potential threats as well.
