Securing DNS and BIND
Our secure DNS service, trapped in its padded cell and very particular about what it says to whom, is shaping up nicely. But what about the actual zone databases?
The good news here is that since our options are considerably more limited than with named.conf, there's less to do. The bad news is that there's at least one type of Resource Record that's both obsolete and dangerous, and must be avoided by the security-conscious.
Here's a sample zone file for the hypothetical domain “boneheads.com” (see Figure 4).
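As an illustrative sketch (not a reproduction of the figure), a zone file along these lines might look like the following; all names are made up, and the addresses come from the 192.0.2.0/24 documentation range:

```
; boneheads.com zone file (illustrative sketch only)
$TTL 3600
@       IN  SOA  ns.boneheads.com. hostmaster.boneheads.com. (
            2000052900  ; serial: yyyymmdd## convention
            10800       ; refresh: 3 hours
            3600        ; retry: 1 hour
            1209600     ; expiry: 2 weeks
            3600 )      ; minimum (negative-caching) TTL: 1 hour
        IN  NS   ns.otherdomain.com.   ; off-site name server
        IN  MX   10 mail.boneheads.com.
mail    IN  A    192.0.2.25
www     IN  A    192.0.2.80
ftp     IN  CNAME www
; Records of the kind discussed below as best avoided:
@       IN  RP   john.smith.boneheads.com. jsmith.boneheads.com.
@       IN  TXT  "John Smith, 555-1234"
www     IN  HINFO "Sparc 10" "Solaris 2.6"
```

The timing values match the discussion that follows: a three-hour refresh interval, a two-week expiry, and a one-hour TTL.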
The first thing to consider is the Start-of-Authority (SOA) record. In the above example, the serial number follows the convention yyyymmdd##, which is convenient and also helps security, as it reduces the chances of accidentally loading an old (obsolete) zone file: the serial number serves as both an index and a time stamp.
The refresh interval is set to three hours, a reasonable compromise between bandwidth conservation and paranoia. That is, the shorter the refresh interval, the less damage a DNS-spoofing (cache-poisoning) attack can do, since any “bad records” propagated by such an attack will be corrected each time the zone is refreshed.
The expiry interval is set to two weeks. This is the length of time the zone file will still be considered valid, should the zone's master stop responding to refresh queries. There are two ways a paranoiac might view this parameter. On one hand, a long value ensures that should the master server be bombarded with denial-of-service attacks over an extended period of time, its slaves will continue using cached zone data and the domain will continue to be reachable (except, presumably, for its main DNS server!). But on the other hand, even in the case of such an attack, zone data may change, and sometimes old data causes more mischief than no data at all.
Similarly, the Time to Live interval should be short enough to facilitate reasonably speedy recovery from an attack or corruption, but long enough to prevent bandwidth cluttering. (The TTL determines how long the individual zone's Resource Records may remain in the caches of other name servers retrieving them via queries.)
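In BIND 8.2 and later, the default TTL is set zone-wide with the $TTL directive and may be overridden per record; the names and addresses in this sketch are made up:

```
$TTL 10800                      ; zone-wide default TTL: 3 hours
www      IN  A   192.0.2.80     ; inherits the 3-hour default
ftp 600  IN  A   192.0.2.21     ; explicit 10-minute TTL for this record
```

A shorter TTL on a record you expect to change (or to need rapid correction after an attack) is a reasonable way to split the difference.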
Our other concerns in this zone file have to do with minimizing the unnecessary disclosure of information. First, we want to minimize address records (“A records”) and aliases (“CNAMEs”) in general, so that only those hosts that need to be publicly known are present. (Actually, we want split DNS, but when that isn't feasible or applicable, we should still try to keep the zone file sparse.)
Second, we want to minimize the amount of (recursive) glue-fetching that goes on. Glue-fetching occurs when a requested name-server (NS) record names a server whose IP address (A record) is not present on the server answering the NS query. In other words, if server X knows that Y is authoritative for the domain WUZZA.com but doesn't actually know Y's IP address, X must fetch that address before it can refer anyone to Y, and this scenario paves the way for DNS-spoofing attacks. Therefore, if you really want to eliminate all recursion (and I hope by now you do), make sure none of your Resource Records requires recursive glue-fetching, and then set the “fetch-glue” option to “no”.
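In BIND 8, the relevant named.conf settings look like the following (“fetch-glue” is a BIND 8 option; BIND 9 removed it and simply never fetches glue):

```
// named.conf excerpt (BIND 8): no recursion, no glue fetching
options {
    recursion no;
    fetch-glue no;
};
```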
Finally, we should use RP and TXT records judiciously, if at all, and must never, ever put any meaningful data into an HINFO record. RP, or Responsible Person, is used to provide the e-mail address of someone who administers the domain. This is best set to as uninteresting an address as possible, e.g., a generic role address such as “email@example.com”, rather than the name of an actual administrator. Similarly, TXT records contain free-form text that has traditionally provided additional contact information (phone numbers, etc.); keep them no more specific than they absolutely need to be, or better still, omit them altogether.
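If you must publish them, a deliberately bland RP/TXT pair might look like this (the RP record's first field is a mailbox with the “@” written as a dot; all names here are made up):

```
; A generic, uninteresting contact -- no individual's name
@        IN  RP   hostmaster.boneheads.com. contact.boneheads.com.
contact  IN  TXT  "DNS administration: hostmaster@boneheads.com"
```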
HINFO is a souvenir of simpler times: HINFO records are used to state the operating system, its version, and even the hardware configuration of the hosts to which they refer! Back in the days when a large percentage of Internet nodes were in academic institutions and other open environments (and when computers were exotic and new), it seemed reasonable to advertise this information to one's users. Nowadays, HINFO has no valid use on public servers, other than obfuscation (i.e., intentionally providing false information to would-be attackers). In short, don't use HINFO records!
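For the record, this is the kind of entry to hunt down and delete (the host details are invented):

```
; DON'T do this on a public server: it hands would-be attackers
; your exact OS and hardware
www   IN  HINFO  "Sun Ultra 5" "Solaris 2.6"
```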
Returning to Figure 4, then, we see that the last few records are unnecessary at best and a cracker's gold mine at worst. And although we decided the SOA record looks good, the NS record immediately following it points to a host on another domain altogether. Remember, we don't like glue-fetching, and if that's the case here, we may want to add an A record for ns.otherdomain.com.
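Under BIND 8, one fix is to include the off-site server's address as out-of-zone “glue” in the zone file; the address below is a placeholder, and note that BIND 9 ignores out-of-zone data in master files, so there you'd instead rely on the other domain being resolvable without recursion:

```
; Supply the off-site name server's address so no recursive
; glue fetch is needed (address is a placeholder)
ns.otherdomain.com.   IN  A   192.0.2.100
```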