Stop Waiting For DNS!

I am an impulse domain buyer. I tend to purchase silly names for simple sites that only serve the purpose of an inside joke. The thing about impulse-buying a domain is that DNS propagation generally takes a day or so, and setting up a Web site with a virtual hostname can be delayed while you wait for your Web site address to go "live".

Thankfully, there's a simple solution: the /etc/hosts file. By manually entering the DNS information, you'll get instant access to your new domain. That doesn't mean it will work for the rest of the Internet before DNS propagation, but it means you can set up and test your Web site immediately. Just remember to delete the entry in /etc/hosts after DNS propagates, or you might end up with a stale entry when your novelty Web site goes viral and you have to change your Web host!

The format of /etc/hosts is largely self-explanatory - each line is an IP address followed by one or more hostnames - and you can add comments by preceding a line with a # character if desired.
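For example, an entry along these lines (the address and names here are made up - substitute your own server's IP and your new domain) gives you instant local resolution:

```
# temporary entry until DNS propagates - remove afterward!
203.0.113.10    sillydomain.example www.sillydomain.example
```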

______________________

Shawn Powers is an Associate Editor for Linux Journal. You might find him chatting on IRC or Twitter.

Comments


Domain buyer

Rac's picture

I'm also addicted to domain names, thank you for this trick to speed up the launch of my future websites.

really

Deshaies's picture

Very useful trick, thank you! But I find that propagation times really are getting shorter these days, right?

/etc/hosts + DNSmasq

Peter Countryman's picture

If you have a small network and an always-on server, you can easily set up your own DNS server with dnsmasq (http://www.thekelleys.org.uk/dnsmasq/doc.html, or try "apt-get install dnsmasq" if you have a Debian-based distro). I got tired of managing a hosts file on each of my home machines. Now I just point each machine's DNS at a local address. And you don't need to wait for changes to propagate - just reload or restart the dnsmasq service.
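As a rough sketch, a minimal /etc/dnsmasq.conf for this kind of setup might look like the following (the LAN address and the extra hosts filename are examples; by default dnsmasq serves the entries from the server's own /etc/hosts):

```
# Listen on loopback and the server's LAN address (example address)
listen-address=127.0.0.1,192.168.1.10
# Don't forward plain hostnames or private-range reverse lookups upstream
domain-needed
bogus-priv
# Optional: keep LAN-only names in a separate file instead of /etc/hosts
addn-hosts=/etc/hosts.home
```

Point each client's DNS at the server's address, and restart (or reload) the dnsmasq service after editing the hosts files so changes take effect immediately.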

And "blackhole" sites you don't want to access

RO's picture

Most of my machines with Internet access get at least one entry added to /etc/hosts to block ad.doubleclick.net thus:

127.0.0.1 ad.doubleclick.net

Add any other sites you want ignored with the localhost IP, 127.0.0.1

Of course, there are a number of sources on the Net from which to download ready-made lists with hundreds if not thousands of such blackholed sites for your surfing peace - BinGle for them.

Automate it, even

Steve Riley's picture

Here's a shell script I wrote that aggregates a few of the more popular blackhole sites into a single HOSTS file. I also change 127.0.0.1 to 0.0.0.0 because this bypasses the wait for the resolver to fail.

http://www.kubuntuforums.net/showthread.php?56419-Script-to-automate-bui...
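The core of such a script might look something like this sketch, assuming the published lists use the common "127.0.0.1 hostname" or "0.0.0.0 hostname" format (the function name and the list URL are illustrative, not taken from the script above):

```shell
#!/bin/sh
# Normalize one published blackhole hosts list read on stdin:
# drop comment lines, rewrite 127.0.0.1 to 0.0.0.0 (no waiting for the
# resolver/connection to fail), and keep only clean "IP hostname" pairs.
normalize_hosts() {
    grep -v '^#' |
        sed 's/^127\.0\.0\.1/0.0.0.0/' |
        awk '$1 == "0.0.0.0" { print $1, $2 }'
}

# Example aggregation (the list URL is a placeholder, not a real source):
# curl -s http://lists.example.org/hosts.txt | normalize_hosts | sort -u >> /etc/hosts
```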

And of course, one can use it

Anonymous's picture

And of course, one can use it to access the pirate bay in Belgium, by adding:

178.73.210.219 thepiratebay.se
178.73.210.219 www.thepiratebay.se
178.73.210.219 thepiratebay.org
178.73.210.219 www.thepiratebay.org
178.73.210.219 piratebay.se
178.73.210.219 www.piratebay.se
178.73.210.219 piratebay.org
178.73.210.219 www.piratebay.org

You can use one line with various aliases

Anonymous's picture

178.73.210.219 thepiratebay.se www.thepiratebay.se thepiratebay.org www.thepiratebay.org

/etc/hosts FTW

Daniel Stavrovski's picture

Besides using it when waiting for a DNS change to propagate over the Internet, I'm also using /etc/hosts to map my local virtual machines to static local IPs, and it's great, as I can easily access the virtual machines by their code-names.

I also have a habit of giving names to my public machines that are not set up in the DNS zone file, so /etc/hosts comes in handy in this case as well.

take care,

- d
