Paranoid Penguin: Taking a Risk-Based Approach to Linux Security

Risk is inevitable. Be pessimistic about individual programs failing, make plans for handling and containing problems, and you'll keep your system as a whole secure.
Defense Scenario Two: Application Security

So, if firewalls aren't the panacea, what else must we do? Earlier in this column, I identified sloppy configurations as a major category of vulnerabilities; the flip side of this is that careful configurations are a powerful defense.

Suppose I've got an SMTP gateway in my DMZ that handles all e-mail passing between the Internet and my internal network. Suppose further that my organization's technical staff has a lot of experience with Sendmail and little time or inclination to learn to use Postfix, which I, as the resident security curmudgeon, consider to be much more secure. Management decides the gateway will run Sendmail. Is all lost?

It doesn't need to be. For starters, as I've stated before in this column, Sendmail's security record over the past few years actually has been quite good. But even if that changed overnight, and three new buffer-overflow vulnerabilities in Sendmail were discovered by the bad guys but not made known to the general public right away, our Sendmail gateway wouldn't necessarily be doomed—thanks to constructive pessimism on the part of Sendmail's developers.

Sendmail has a number of important security features, and two in particular are helpful in the face of potential buffer overflow vulnerabilities. Sendmail can be configured to run in a chroot jail, which limits what portions of the underlying filesystem it can see, and it can be configured to run as an unprivileged user and group, which minimizes the chances of a Sendmail vulnerability leading directly to root access. Because Sendmail listens on the privileged port TCP 25, it must run as root at least part of the time; in practice it selectively demotes itself to the unprivileged user and group, so this feature mitigates the risk rather than eliminating it.
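
To make the pattern concrete, here is a minimal sketch, in Python rather than in Sendmail's actual C code or configuration syntax, of how a generic network daemon can acquire its privileged port as root and then confine itself before handling any untrusted input. The jail directory, account names and port handling below are illustrative assumptions, not anything Sendmail itself uses:

    import os
    import pwd
    import grp
    import socket

    # Illustrative values; a real daemon reads these from its configuration.
    JAIL_DIR = "/var/spool/smtpd-jail"   # hypothetical chroot directory
    RUN_AS_USER = "mailuser"             # hypothetical unprivileged account
    RUN_AS_GROUP = "mailgroup"

    def start_listener():
        # Must run as root: binding to privileged port 25 requires it.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 25))
        listener.listen(5)
        return listener

    def drop_privileges():
        uid = pwd.getpwnam(RUN_AS_USER).pw_uid
        gid = grp.getgrnam(RUN_AS_GROUP).gr_gid

        # Confine the process to the jail; from here on, "/" is JAIL_DIR.
        os.chroot(JAIL_DIR)
        os.chdir("/")

        # Shed group privileges first, then user privileges.
        os.setgroups([])
        os.setgid(gid)
        os.setuid(uid)

    if __name__ == "__main__":
        listener = start_listener()   # still root here
        drop_privileges()             # now chrooted and unprivileged
        conn, addr = listener.accept()
        # ... handle the SMTP conversation as the unprivileged user ...

The ordering matters: the socket is acquired while the process is still root, and setuid() comes last, because once it succeeds the process cannot regain its privileges. Any vulnerability exploited after that point runs as the unprivileged user, inside the jail.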

Being chroot-able and running as an unprivileged user/group are two important security features common to most well-designed network applications today. Just as a good firewall policy aims for both prevention and containment, a good application configuration also takes into consideration the possibility of the application being abused or hijacked. Thus, the true measure of an application's securability isn't only the number of CERT advisories it's been featured in; it must also include the mitigating features the application natively supports.

Conclusion

The risk-based approach to security has two important benefits. First, rather than always having to say no to things, security practitioners using this approach find it easier to say “yes, if” (as in, “yes, we can use this tool if we mitigate Risk X, contain Risk Y” and so on). Second, by focusing not only on preventing known threats but also on more generalized risks, the risk-based approach fosters defense in depth, in which layered controls (firewall rules plus chrooted applications plus intrusion detection systems and so on) minimize the chances of any one threat having global consequences.

I hope you've found this discussion useful, and that it's lent some context to the more wire-headed security tools and techniques I tend to cover in this column. Be safe!

Mick Bauer, CISSP, is Linux Journal's security editor and an IS security consultant in Minneapolis, Minnesota. He's the author of Building Secure Servers With Linux (O'Reilly & Associates, 2002).
