Practical Unix and Internet Security, Second Edition
Title: Practical Unix & Internet Security, Second Edition
Authors: Simson Garfinkel and Gene Spafford
Publisher: O'Reilly & Associates, Inc., 1996
Price: US $39.95, CAN $56.95
Reviewer: Dan Wilder
Practical Unix & Internet Security is the much revised and enhanced second edition of O'Reilly's familiar Practical Unix Security. This book amounts to a survey course that covers everything from the basics to advanced topics, not always in equal depth, but well enough to point you in the right direction when you need to tackle a new topic, or one you are a little rusty on. You may need additional resources; the 47 pages of well-annotated bibliography and resource lists found at the end should help greatly with further reading, contacts and so on.
The emphasis is on understanding principles. The exercise of implementing an appropriate, well-designed security strategy is left to the reader. This is good, for there are not likely to be many cut-and-dried approaches of any generality worth their salt. A useful strategy may vary greatly from site to site, and from time to time. No one book could hope to cover in great detail all situations and all varieties of Unix. This book sets out to furnish the broad background and perspective necessary to such an undertaking, and I believe the authors have risen well to their task.

Notwithstanding the overall general approach, many detailed examples of procedures are given to illustrate and anchor the discussion, and to give the Unix novice a place to start on this rather complex topic. Too many books about computing never descend to a concrete plane, and their lessons may be lost on just those who could benefit most. This book avoids that pitfall. Indeed, the experienced reader will likely skip many of the examples, while appreciating the insights and points of interest found between them.

The beginning of the book covers Unix basics: the file system, permissions, devices, users and passwords. Much of these chapters is also devoted to organizational issues. This is appropriate, as Unix security is not merely a technical issue, but has a substantial social dimension. Accordingly, policy, history and risk assessment are treated briefly here. The common-sense approach taken is exemplified by this passage (page 44):
The key to successful risk assessment is to identify all of the possible threats to your system, but only to defend against those risks which you think are realistic threats.

Chapters on advanced topics include:
RPC, NIS, NIS+ and Kerberos
Wrappers and Proxies
Secure SUID and Network Programs
as well as a whole section on handling security incidents. The appendices have some very nice security checklists.
I found most of the information presented accurate and up-to-date, though of course not always complete. There were some exceptions. For example, in the chapter on UUCP (page 421):
UUCP was designed and optimized for low-speed connections. When used with modems capable of transmitting at 14.4 Kbps or a faster rate, the protocols become increasingly inefficient.
The authors are perhaps correct about historical UUCP, which unfortunately is still what is currently shipping from many vendors. A modern UUCP, such as the Taylor UUCP present in most Linux distributions, will give even the fastest competing file transfer methods a run for their money over the same connection. If it were not for the negative tone the authors take about UUCP in this chapter, I'd chalk this one up as information that just didn't make the cut. In a later chapter, while offering alternatives to the expense of installing and maintaining a firewall, the authors finally touch on the tip of the UUCP iceberg (page 668):
Use a hard-wired UUCP connection to transfer email between your internal network and the Internet. This connection will allow your employees to exchange email with other sites for work-related purposes, but will not expose your network to IP-based attacks.
Bingo! This is one of several reasons why many businesses and individuals in the Seattle area, and no doubt elsewhere, use UUCP for some or all of their e-mail service. Too bad it didn't rate mention back where they were discouraging us from even considering UUCP.

An amusing comparison of Linux and the GNU utilities with some others is found on page 704. The authors cite a study using a program called "fuzz", in which Unix utilities crashed when presented with random inputs. Over a quarter of standard Unix utilities crashed, while less than a tenth of the Linux (mostly GNU) utilities tested did so. Though all the commercial vendors were presented with the results of these tests, a re-test some years later gave similar results. While as much as one utility in ten is still pretty high, it is a testimony to free software, and GNU in particular, that the levels attained are significantly lower than those in commercial systems.

The implication drawn, however, is less amusing. The authors point out, correctly, that many of the same problems that make a program crash on random input will allow a skilled attacker, adept at exploiting the mechanisms of the crashes, such as buffer overflows and array bounds violations, to obtain behavior from a program not anticipated by its authors or installers.

Woven through this work are discussions of software quality, an issue dear to my heart. Garfinkel and Spafford touch on these in the introduction (pages 17-18):

[ ... ] software designers are not learning from past mistakes. For instance, buffer overruns ... have been recognized as a major Unix problem for some time, yet software continues to be discovered containing such bugs, and new software is written without consideration of these past problems [ ... ]
A more serious problem than any particular flaw is the fact that few, if any, vendors are performing an organized program of testing on the software they provide ... few apparently test their software to see what it does when presented with unexpected data or conditions.
In the chapter "Writing Secure SUID and Network Programs" they spend much more time on this theme. Lists of good, basic, common-sense rules are found there, such as "Don't use routines that fail to check buffer boundaries when manipulating strings of arbitrary length." Violations of this rule alone have resulted in several CERT advisories, including, I suspect, a very recent advisory concerning a popular e-mail transfer program. Many other guidelines found in this chapter could have prevented a number of serious past breaches of security. For example, "check all return codes from system calls" and "Using the access() function followed by an open() is a race condition, and almost always a bug." Perhaps one of these guidelines will help me, some day soon.

I think I'll put this book on my night stand, for evenings of enjoyable late-night study as the rainy season moves in.
Dan Wilder writes and enjoys the rain in Seattle, Washington. You may reach him via email to firstname.lastname@example.org.