Paranoid Penguin: Taking a Risk-Based Approach to Linux Security
So, if firewalls aren't the panacea, what else must we do? Earlier in this column, I identified sloppy configurations as a major category of vulnerabilities; the flip side of this is that careful configurations are a powerful defense.
Suppose I've got an SMTP gateway in my DMZ that handles all e-mail passing between the Internet and my internal network. Suppose further that my organization's technical staff has a lot of experience with Sendmail and little time or inclination to learn to use Postfix, which I, as the resident security curmudgeon, consider to be much more secure. Management decides the gateway will run Sendmail. Is all lost?
It doesn't need to be. For starters, as I've stated before in this column, Sendmail's security record over the past few years actually has been quite good. But even if that changed overnight, and three new buffer-overflow vulnerabilities in Sendmail were discovered by the bad guys but not made known to the general public right away, our Sendmail gateway wouldn't necessarily be doomed—thanks to constructive pessimism on the part of Sendmail's developers.
Sendmail has a number of important security features, and two in particular are helpful in the face of potential buffer-overflow vulnerabilities. Sendmail can be configured to run in a chroot jail, which limits what portion of the underlying filesystem it can see, and it can be configured to run as an unprivileged user and group, which minimizes the chances of a Sendmail vulnerability leading directly to root access. Because Sendmail listens on privileged TCP port 25, it must run as root part of the time; in practice, it selectively demotes itself to the unprivileged user and group, which makes this a partial mitigation rather than a complete one.
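The general pattern Sendmail follows here can be sketched in a few lines: while still root, bind the privileged port; then chroot into the jail and drop to the unprivileged user and group. The sketch below is illustrative only, not Sendmail's actual implementation; the "mail" user and "/var/empty" jail directory are assumptions you would replace with whatever accounts and paths your system uses.

```python
import os
import pwd
import socket

def bind_then_drop(port, user="mail", jail="/var/empty"):
    """Sketch: bind a listening socket while still root, then
    confine the process (chroot) and drop privileges.
    'mail' and '/var/empty' are placeholder values."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))        # needs root for ports below 1024 (e.g., 25)
    s.listen(5)

    pw = pwd.getpwnam(user)   # look up uid/gid before chroot hides /etc
    os.chroot(jail)           # limit the visible filesystem (needs root)
    os.chdir("/")             # don't keep a working directory outside the jail
    os.setgroups([])          # shed supplementary groups
    os.setgid(pw.pw_gid)      # drop group first...
    os.setuid(pw.pw_uid)      # ...then user; this step is irreversible
    return s
```

Note the ordering: the group is dropped before the user, because once setuid() succeeds the process no longer has the privilege to change its group IDs. After this function returns, a buffer overflow in the daemon yields an attacker only an unprivileged process trapped in a nearly empty directory.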
Being chroot-able and running as an unprivileged user/group are two important security features common to most well-designed network applications today. Just as a good firewall policy aims for both prevention and containment, a good application configuration also takes into consideration the possibility of the application being abused or hijacked. Thus, the true measure of an application's securability isn't only the number of CERT advisories it's been featured in; it must also include the mitigating features natively supported by the application.
The risk-based approach to security has two important benefits. First, rather than always having to say no to things, security practitioners using this approach find it easier to say “yes, if” (as in, “yes, we can use this tool if we mitigate Risk X, contain Risk Y” and so on). Second, by focusing not only on prevention of known threats but also by considering more generalized risks, the risk-based approach fosters defense in depth, in which layered controls minimize the chances of any one threat having global consequences (firewall rules plus chrooted applications plus intrusion detection systems and so on).
I hope you've found this discussion useful, and that it's lent some context to the more wire-headed security tools and techniques I tend to cover in this column. Be safe!
Mick Bauer, CISSP, is Linux Journal's security editor and an IS security consultant in Minneapolis, Minnesota. He's the author of Building Secure Servers With Linux (O'Reilly & Associates, 2002).