Stopping DDOS Attacks
On Thursday, February 20, 2003, at about 0130 GMT, the popular LiveJournal site became the victim of a massive distributed denial-of-service attack. LiveJournal staffers and upstream providers first tried to filter by IP, but they soon discovered what the "D" in DDOS means. After blocking about one quarter of the IP addresses on the Internet, they got on their load balancer and implemented some unknown but effective measures (repeated e-mails to them went unanswered). I can only assume these measures included some quality of service/rate limiting methods. Despite continued flooding, the site returned to usability after about four days of being somewhere between slow and totally unreachable.
As a paid LiveJournal subscriber, I roused myself from a storm of dark imprecations on the soul of anyone who would try to destroy a site that has become the epitome of "on-line community" long enough to wonder: what do you do about such an event?
In the absence of comment from LiveJournal, I asked Robert Dinse, head honcho of Eskimo.com (an ISP that dates back to when ! was part of an e-mail address and not a hint the e-mail might be spam), how DDOSes worked, and what he did about them. "The most common [attacks] are smurf and fraggle", says Dinse. "Smurf works by sending an ICMP echo request packet to a network with an open broadcast address. The packet has the source IP forged to be that of the target host. That causes every machine on the network to respond with an ICMP echo reply to the forged host IP address. Thus the network with an open broadcast address acts as an amplifier. Fraggle works the same as smurf except that it uses UDP echo request and echo reply." Naturally, having the source IP forged renders the attack untraceable by normal means.
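The "amplifier" effect Dinse describes is easy to quantify with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (a single /24 network with an open broadcast address, ordinary ping-sized packets), not measurements from any real attack:

```python
# Back-of-the-envelope smurf amplification; all figures are example assumptions.
hosts_responding = 254    # hosts on one /24 behind an open broadcast address
request_bytes = 84        # one ICMP echo request: IP + ICMP headers + payload
attacker_rate_pps = 1000  # forged requests per second the attacker sends

# Every forged request elicits one reply per responding host,
# all aimed at the victim whose address was forged as the source.
reply_bytes_per_request = hosts_responding * request_bytes
victim_bandwidth_bps = attacker_rate_pps * reply_bytes_per_request * 8

print(f"Amplification factor: {hosts_responding}x")
print(f"Traffic at victim: {victim_bandwidth_bps / 1e6:.1f} Mbit/s")
```

So a single attacker on a modest link, bouncing packets off one misconfigured network, can bury a victim under two orders of magnitude more traffic than the attacker sends.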
The more sophisticated attacks involve viruses that infect machines and log them into an IRC channel to wait for attack instructions; Steve Gibson's account of the attacks on GRC.com is a longish discussion of that topic. In this case, the source IP addresses, if not spoofed, are those of several hundred semi-innocent zombie PCs. This was the attack perpetrated on LiveJournal's Apache/Linux web servers--loading them up with connection requests.
Technical means are available to prevent such attacks. Smurf and fraggle can be stopped by not allowing packets addressed to broadcast addresses through your firewall. Additionally, drop packets arriving from outside whose source addresses claim to be from inside your network (ingress filtering). TCP SYN flood attacks can be stopped with SYN cookies, a bit of cryptologic magic on the TCP sequence number that is built into most IP stacks, including most default Linux kernels. Full-out connection requests can be rate-limited, either with mod_throttle in Apache or with iptables in the IP stack. Of course, if the sheer volume of packets coming down from your ISP clogs your bitpipe, you have to convince the ISP to put the filters on its upstream traffic. This method is fine for a relatively small outfit like Eskimo, but what about their upstream? Most backbone providers, Dinse indicated, run their routers relatively close to capacity, and adding filters takes resources they are unwilling to give.
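As a rough sketch of what those filters might look like in iptables and sysctl syntax (the interface name, address range and rate limits here are invented for illustration, not a recommended production configuration):

```shell
# Smurf/fraggle: drop packets addressed to a broadcast address
iptables -A INPUT -m addrtype --dst-type BROADCAST -j DROP

# Ingress anti-spoofing: drop packets arriving on the outside interface
# (eth0 and 192.168.0.0/16 are assumptions) claiming an inside source
iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP

# SYN cookies: the sequence-number trick built into the Linux kernel
sysctl -w net.ipv4.tcp_syncookies=1

# Crude rate limiting of new inbound HTTP connections
iptables -A INPUT -p tcp --dport 80 --syn \
    -m limit --limit 25/second --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --syn -j DROP
```

None of this helps, of course, once the flood has already saturated the pipe between you and your ISP; the rules have to move upstream with the traffic.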
Okay, fine, that's the technical side of how to mitigate the effects of a DDOS. But what do you actually do about it; that is, how do you get back to the parties responsible and bring them to justice? Quoting Dinse again:
To chase down the originator requires [that] the attack be sustained long enough to contact someone at the network that was used for an amplifier, for them to get their backbone involved, and for the backbone to be willing to go from router to router interface by interface and trace it back, usually into another backbone or two or three.
The attacks are usually not sustained for long periods of time. The sites used as reflectors are more often than not large universities or corporations where it's very difficult to find out who is responsible. When you do reach them, more often than not they're not willing to chase it further. If they are [willing], their backbone more often than not is not [at all]. If they are [willing], it usually runs into another network that is not.
In all of the years I've been providing internet service, we have never successfully chased a DDOS attack back to the origin, and it's not for lack of trying. [Emphasis mine.]
And that's only the legwork. What about getting the Feds involved? In the GRC.com case I mentioned above, the FBI was totally uninterested; of course, that case occurred in May 2001. The new Department of Homeland Security seems somewhat more interested in such things. Then again, this is the same outfit that sat on the recent Sendmail vulnerability for two-plus months, so hackers everywhere are justifiably skeptical. A slide I found from the October 2002 meeting of the North American Network Operators' Group is particularly telling: the FBI is totally uninterested in technical solutions to the problem; it prefers to treat the symptoms rather than track the miscreants back to their lairs. So any possibility of getting hold of one of the zombie PCs, dissecting the virus and finding the ultimately responsible party is basically nonexistent. And heaven forbid the perpetrator should turn out to be a juvenile, or across some line on a map.
The answer to our question about what to do, then, seems to be "nothing". But we know in our heart of hearts that's not an acceptable answer, and we're hackers, therefore smarter than the average bear. So what is the answer? A UN commission? Street justice? A big foam clue bat?
Or something completely different? We fought spam for years and finally, amongst ourselves, came up with a number of fairly effective tools, including one (Bayesian analysis) that even AOL and Microsoft are implementing. These tools haven't cut network traffic yet, but AOL's tools were released only recently, and Microsoft's are still in beta. (We penguin-heads can Google for spambayes or ESR's bogofilter, or grab Mozilla 1.3.) The solution in this case is simply to make it impractical to spam. Can it be that simply leaning on our upstreams to implement proper filtering will likewise make a DDOS impractical?
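For the curious, the Bayesian approach boils down to scoring each token by how often it shows up in spam versus legitimate mail, then combining the scores. Here is a minimal sketch of the idea; the two tiny corpora and the smoothing constants are invented for illustration, and real tools such as spambayes and bogofilter are considerably more sophisticated:

```python
# Toy Bayesian spam scoring in the spirit of spambayes/bogofilter.
# Training corpora and constants are invented example data.
from collections import Counter

spam_corpus = ["buy cheap pills now", "cheap pills cheap deals"]
ham_corpus = ["meeting notes attached", "lunch at noon today"]

def token_counts(corpus):
    counts = Counter()
    for message in corpus:
        counts.update(message.split())
    return counts

spam_counts = token_counts(spam_corpus)
ham_counts = token_counts(ham_corpus)
n_spam, n_ham = len(spam_corpus), len(ham_corpus)

def spam_probability(token):
    # Per-token probability that a message containing this token is spam,
    # clamped away from 0 and 1 as crude smoothing
    s = spam_counts[token] / n_spam
    h = ham_counts[token] / n_ham
    if s + h == 0:
        return 0.4  # unseen tokens lean slightly toward ham
    return max(0.01, min(0.99, s / (s + h)))

def score(message):
    # Combine per-token probabilities, naively assuming independence
    p_spam = p_ham = 1.0
    for token in message.split():
        p = spam_probability(token)
        p_spam *= p
        p_ham *= 1.0 - p
    return p_spam / (p_spam + p_ham)

print(score("cheap pills"))    # near 1.0: spammy
print(score("meeting notes"))  # near 0.0: ham
```

The point of the analogy isn't the math; it's that a grass-roots technical fix changed the economics of spamming, and the same community pressure applied to upstream filtering might do the same for DDOS.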
Glenn Stone is a Red Hat Certified Engineer, sysadmin, technical writer, cover model and general Linux flunkie. He has been hand-building computers for fun and profit since 1999, and he is a happy denizen of the Pacific Northwest.