We've all heard the stories about botnets and the emerging, professional tools for managing them in a business-like style, but many engineers probably have not had the opportunity to work with them or research them in depth.
Botnets and computer zombies are increasing dramatically. The ShadowServer Foundation continues to gather interesting statistics on this trend, showing how many botnets were found in the last two years (Figure 1).
The questions are simple. How can we be sure that no zombie computers exist on our network? Are patching, antivirus, anti-rootkit and antispam protections sufficient? Is something else necessary? Can we really trust one leading IT security vendor? Would it be better to implement two? Should we exercise some other techniques?
Unfortunately, there are no easy answers to those questions. In March 2008, a security company called Damballa was the source of news that a new Kraken botnet existed in the wild and was far larger in reach than the Storm botnet. Damballa reported seeing approximately 400,000 compromised computers (victims)—some of them from at least 50 Fortune 500 companies. It's an interesting example, because many security (mostly antivirus) vendors responded quickly that they already had protection in place and that the threat was old, so there was no need to worry. Was this really a threat, and how did Damballa get these numbers?
To simplify the story, Damballa discovered (probably during a security audit) a new malware with hard-coded addresses (URLs) of control centers (CCs: the computers that issue tasks to zombie machines and to which all infected computers report). Damballa also found that some of those hard-coded addresses were not registered in a DNS service (the botnet probably was being tested at that time, and the authors were preparing to launch it later). Damballa registered those domains as its own and ended up controlling quite a large botnet for research. Now, Damballa could identify the IP addresses of zombie computers that started to report to its CC, and it discovered a number of devices sitting inside large corporate networks. Damballa could play with the bots and discover their potential power for malicious activity.
Much discussion has ensued about Damballa's ethical behavior. It hasn't contacted any security company about the methods of infection it discovered. It hasn't published any details of the exploits used to any bug-tracking list, nor has it contacted any vendors to alert them of the issue. Damballa wanted all the credit itself.
I don't approve of those things, but as a security technologist, having the opportunity to research such botnets is really tempting, and I can understand (but still not agree with) those decisions. Having an army of zombies under the control of a security organization is much better than having them in the wild. On the other hand, Damballa allowed malware to spread undetected just to justify its research.
But, that's not the point. The real point is Damballa proved that undetected botnets could exist, even in highly secured environments, in companies that have dedicated resources to fighting malware.
So, if large corporations that have committed a small fortune to protect system and network resources can be vulnerable, who's safe? Apparently, having state-of-the-art antivirus and malware protection isn't enough. What can you do about it, and how should you protect your IT systems and fight undetectable malware?
One solution is something called Darknet.
The idea of Darknet isn't new. It evolved from honeypots—a solution that's undervalued and underestimated, although it's really easy to implement. The term Darknet refers to a private or public chunk of a network that is empty of any servers or services. The only exception is at least one silent host on this network, catching and inspecting all packets. We can call it a silent honeypot. The idea is simple. We don't expect any traffic on this network, so any packet found here is not legitimate and needs to be analyzed.
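The silent-host idea can be sketched in a few lines of Python. This is a minimal illustration, not a production NIDS; the class name, the log format and the use of TCP-only listening are my own simplifications:

```python
import datetime
import socket
import threading

class SilentHoneypot:
    """Listen on an address that should receive no traffic and log
    every connection attempt without ever answering. On a Darknet,
    any packet that arrives is suspect by definition."""

    def __init__(self, host="0.0.0.0", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((host, port))   # port 0 = pick a free port
        self.sock.listen(5)
        self.port = self.sock.getsockname()[1]
        self.hits = []                 # (timestamp, src ip, src port)

    def serve(self, max_conns=1):
        for _ in range(max_conns):
            conn, addr = self.sock.accept()
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            self.hits.append((stamp, addr[0], addr[1]))
            conn.close()               # close silently; send nothing back
        self.sock.close()
```

A real deployment would capture at the packet level (raw sockets or libpcap) rather than accepting TCP connections, but the principle is the same: record, never respond.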
As shown in Figure 2, the network has been divided into two parts with a /26 mask. The Darknet part consists of silent “traffic catchers” or Network Intrusion Detection Systems (NIDS).
There are plenty of sophisticated commercial Network Intrusion Detection Systems, but if you don't want to pay a lot of money, you can use some of the open-source and free solutions, such as Snort, Argus or even the fully functional Darknet solution from Team Cymru (see Resources). These tools let you capture full packets for analysis of new or zero-day exploits in the wild.
Figure 3 from the Team Cymru Web site shows how Darknet detected a worm just minutes after its release.
In this example, Darknet has a public address space, which means it will catch all the traffic from outside the network. So, we will have all the information about what threats are currently in the wild, and we will be alerted about new traffic patterns and potential zero-day exploits. But, how can we detect botnets inside our network? To answer that question, we need to look deeper into malware behavior.
About 90% of malware these days behaves in specific and common ways, so from the network traffic perspective, we can say that typical malware has some distinct characteristics:
It will assure its survival. It's not exactly network-related, but it will copy itself to the Start folder or add itself to startup scripts or the registry (Windows).
It will try to replicate and spread (infect other computers in its neighborhood) by searching for e-mail addresses and sending messages from a user's mailbox (mail channel); creating files on Windows shared folders, network drives and P2P shares (let's call that the P2P channel); or direct infections—using zero-day exploits on unpatched systems.
It will try to contact the control center (CC) to download other malware and to get instructions—usually from Web sites (Web channel) or Internet Relay Chat (IRC channel). Often these CCs are located on computers using dynamic IP addresses (dynamic DNS) or located in countries known to be sources of malicious software (China, Russia and so forth) or on suspicious networks (such as the so-called Russian Business Network).
It will be used for malicious purposes—typically spam (mail channel); data leakage, spyware, identity theft and phishing; DDoS; or ransomware, often via the Web channel as well.
As we can see, malware often uses the most popular channels to spread and operate—mainly Web, mail, P2P and IRC channels.
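As a toy illustration of screening traffic by channel, consider the sketch below. The port-to-channel map and the allow-listed server addresses are assumptions for the example, not values from any real deployment:

```python
# Map well-known destination ports to the channels described above.
CHANNELS = {
    25: "mail", 53: "dns", 80: "web",
    6667: "irc", 6668: "irc", 6669: "irc",
}

# Hosts legitimately allowed to speak on a restricted channel,
# e.g. only the mail server may send SMTP out (example addresses).
ALLOWED = {
    "mail": {"10.0.0.10"},
    "dns":  {"10.0.0.11"},
}

def suspicious_flows(flows):
    """flows: iterable of (src_ip, dst_port) pairs seen leaving the
    network. Return (src_ip, channel) for flows on a restricted
    channel from an unauthorized source, and for any IRC use at all."""
    alerts = []
    for src, dport in flows:
        channel = CHANNELS.get(dport)
        if channel in ALLOWED and src not in ALLOWED[channel]:
            alerts.append((src, channel))
        elif channel == "irc":   # no internal host should use IRC
            alerts.append((src, channel))
    return alerts
```

A workstation sending SMTP directly, or anything speaking IRC, gets flagged; ordinary Web traffic passes.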
Knowing this information, we can create a Darknet inside our network and place some traffic catchers or IDS systems there to analyze and gather all suspicious data.
The method shown in Figure 4 can be explained in one sentence: “All outgoing traffic that is not legitimate (violates a company's policy) or traffic that is suspicious will be forwarded for analysis.”
One question remains. How do we decide what traffic is malicious or unwanted? The ultimate solution would be to forward all packets with the "evil bit" set, as the tongue-in-cheek RFC 3514 proposes. Unfortunately, reality is a little more complicated.
Let's consider an example. If we have a company with internal mail and a name server (DNS/WINS), we can redirect all outgoing traffic (other than from these servers) to ports TCP 25 (SMTP), TCP/UDP 53 (DNS), TCP 6667–6669 (IRC) and all known P2P software (like LimeWire) to Darknet hosts for analysis. As computers inside the network don't really send traffic directly to mail servers or connect to IRC, we can block these channels to avoid spreading malware. If the nature of a company's business is focused on a local area or country, we also can redirect all WWW port TCP 80 requests to suspicious domains (such as .cn or .ru), dynamic DNS domains and so on.
To accomplish this task, we can set up basic iptables rules on a Linux firewall, as in this example (we are redirecting all requests coming from an internal eth0 interface destined for the TCP 6669 IRC port to the internal host 10.0.0.2):

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 6669 -j DNAT --to 10.0.0.2:6669
iptables -A FORWARD -p tcp -i eth0 -d 10.0.0.2 --dport 6669 -j ACCEPT

We also will need to configure the internal server with address 10.0.0.2 to catch all the traffic. There are two ways to do that: we can record all the packets going to this server, or we can install some services (WWW, IRC, SMTP, POP3, DNS) and then monitor them for connections and integrity.
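Whichever way the traffic lands on the catcher, a useful first pass is simply counting repeat offenders in the connection log. A minimal sketch (the in-memory log format is an assumption; a real tool would parse pcap files or syslog):

```python
from collections import Counter

def top_talkers(conn_log, threshold=3):
    """conn_log: iterable of (src_ip, dst_port) records captured on
    the Darknet host. A source that keeps knocking on a network that
    should be silent is a likely infected machine; report any source
    at or above the threshold, busiest first."""
    counts = Counter(src for src, _ in conn_log)
    return [(src, n) for src, n in counts.most_common() if n >= threshold]
```

Feeding the day's log through this and mailing the result to an administrator already turns a passive packet sink into a simple infection detector.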
Let's focus on a simple packet-capture machine. More sophisticated solutions (such as the ones from antivirus companies) usually have a dozen machines (most likely VMware images) with different operating systems, open shares, Web servers, P2P clients, mail agents, instant-messaging clients and so on.
After the attack/infection, system changes will be compared to the input state (VMware snapshot) to analyze malware behavior and to ease the remediation process.
Such labs can be very complex, but to achieve basic functionality (traffic monitoring and threat alerting), it is enough to have one computer with your favorite Linux distribution.