Detecting Suspect Traffic
The Port Scan Attack Detector (psad) is a program written entirely in Perl and designed to work with restrictive ipchains/iptables rulesets to detect both TCP and UDP port scans. It features a set of highly configurable danger thresholds (with sensible defaults provided), e-mail alerting, optional automatic blocking of offending IP addresses via dynamic reconfiguration of the ipchains/iptables firewall ruleset, and verbose alert messages that include the source, destination, scanned port range, begin and end times, and DNS and whois lookups. For iptables firewalls, psad makes use of the extended logging capability to detect highly suspect scans, such as FIN, Xmas and NULL scans, that are easily leveraged against a target machine via nmap. In addition, psad incorporates many of the TCP and UDP signatures included with the Snort intrusion detection system to detect scans and/or traffic to various backdoor programs (e.g., EvilFTP, GirlFriend, SubSeven) and DDoS tools (mstream, shaft). psad is free software released under the GNU General Public License and is available from www.cipherdyne.com.
The basic task of psad is to make use of firewall log messages generated by either ipchains or iptables to detect suspect network traffic. To accomplish this task, psad needs a way of efficiently extracting the data it needs from the log messages the firewall writes to syslog. Hence, upon installation, psad creates a named pipe called psadfifo in the /var/log/ directory and reconfigures syslogd to write all kern.info messages to the pipe. In syslog parlance, both ipchains and iptables log messages are reported via the kern facility at the info log level. The bulk of the work done by psad is accomplished by two separate dæmons: kmsgsd and psad.
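The pipe-and-syslog plumbing described above amounts to one fifo and one syslog.conf selector. A minimal sketch of what the installer sets up (paths are as described in the article; the exact syslog.conf syntax may vary with your syslogd version):

```shell
# Create the named pipe that psad reads firewall messages from
# (the psad installer does this automatically).
mkfifo /var/log/psadfifo

# Line added to /etc/syslog.conf: route all kern.info messages
# (the facility/level at which ipchains and iptables log) to the pipe:
#
#     kern.info    |/var/log/psadfifo

# Tell syslogd to reread its configuration.
kill -HUP "$(pidof syslogd)"
```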
The kmsgsd dæmon is relatively simple; its sole responsibilities are to open the psadfifo pipe, read any kern.info messages indicating that ipchains/iptables has denied or rejected a packet and write all such messages to the psad data file, /var/log/psad/fwdata. Because the regex within kmsgsd matches only packets that have been denied or rejected, the fwdata file provides psad with a distilled data stream containing only information that may be significant from a security perspective. However, this information is only as complete and verbose as the firewall is able to generate; hence the need for the most restrictive firewall ruleset possible.
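The article's Listing 1 shows the actual Perl, but the filtering idea itself can be sketched in shell. The pattern and sample log lines below are illustrative, not kmsgsd's exact regex:

```shell
# Keep only firewall messages for denied/rejected packets, discarding
# everything else syslog sends down the pipe. Two sample kern.info lines:
printf '%s\n' \
  'kernel: DROP IN=eth0 SRC=192.168.1.5 DST=192.168.1.1 PROTO=TCP DPT=23' \
  'kernel: martian source 10.0.0.1 from 10.0.0.2' |
  grep -E 'DENY|DROP|REJECT'
# Only the first line survives the filter and would be appended to fwdata.
```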
iptables does not support a logging option on rules that drop or reject packets. An easy solution, though, is to precede any drop rule with a logging rule that uses the --log-prefix option; see rule number four of simplefirewall.sh. The kmsgsd code snippet in Listing 1 illustrates reading firewall messages from the psadfifo named pipe and using a regex to match any dropped-packet messages generated by either ipchains or iptables.
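A log-then-drop pair in the spirit of rule four of simplefirewall.sh might look like the following. The chain, port and prefix string are illustrative, not the exact contents of simplefirewall.sh:

```shell
# iptables has no logging flag on the DROP rule itself, so log first,
# tagging the message so it is easy to pick out of the kern.info stream...
iptables -A INPUT -p tcp --dport 23 -j LOG --log-prefix "DROP "

# ...then actually drop the packet with a matching rule.
iptables -A INPUT -p tcp --dport 23 -j DROP
```

Because the LOG target is non-terminating, the packet falls through to the DROP rule immediately after being logged.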
Once the fwdata file is populated with messages for denied packets, it is the responsibility of the psad dæmon to analyze them and judge whether the packets constitute a port scan or other suspect network traffic. psad accomplishes this by periodically checking fwdata for new lines indicating that packets have been denied recently by the firewall. Port scans are assigned a danger level from one to five based on how many packets are denied within a fixed period of time. However, a scan also may be assigned a danger level if it matches any of the nasty signatures contained in the psad_signatures file. Examples of such signatures include “NMAP Fingerprint attempt” (for any packet that sets the URG, PSH, SYN and FIN flags) and “DDoS - mstream client to handler” (for a SYN packet destined for port 15104).
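For reference, the stealth scans and fingerprinting probes that trip these signatures map to nmap invocations like the following (the target address is a placeholder; scan only hosts you are authorized to test):

```shell
# Stealth scans detected via iptables' extended logging of TCP flags:
nmap -sF 192.168.1.1    # FIN scan: only the FIN flag set
nmap -sX 192.168.1.1    # Xmas scan: FIN, PSH and URG set
nmap -sN 192.168.1.1    # NULL scan: no flags set

# OS fingerprinting sends odd flag combinations, including the
# URG/PSH/SYN/FIN packet matched by "NMAP Fingerprint attempt":
nmap -O 192.168.1.1
```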
After a port scan reaches a predetermined danger threshold, an e-mail is sent that contains several pieces of information: the source IP from which the scan originated, the destination IP, the range of newly scanned ports (TCP or UDP) within the last scan interval, the complete range of scanned ports since the beginning of the scan (TCP and UDP), the start and end times, the danger level assigned by psad, reverse DNS information, the TCP flags that were used in the scan (along with the corresponding nmap command-line options that would generate such a scan) and whois information.
Instead of relying on the whois client installed by default on most Linux distributions, psad makes use of an excellent whois client written by Marco d'Itri. This client has the advantage of accurately querying the correct whois database for almost any scanning source IP address. psad also features the ability to reconfigure an ipchains/iptables ruleset to block any IP address that has reached a sufficiently high danger threshold. This feature is disabled by default, since many administrators do not want anyone on the Net to be able to force the firewall to block access to an arbitrary web site simply by spoofing nasty packets from that site's IP address.
Some administrators, however, like the ability to set a high threshold for a port scan or other nasty traffic and have psad block it automatically. Note that any traffic analyzed by psad already has been blocked by the firewall. But if a scan reaches a sufficiently high threshold, the auto-blocking feature blocks all traffic from the offending source IP, since the administrator probably does not want such an IP sending any packets, legitimate or not, to the local network.
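Conceptually, the rule psad inserts when auto-blocking fires is equivalent to something like the following (the source address is a placeholder, and the exact rules psad generates may differ):

```shell
# Drop every packet from the offending source IP, legitimate or not,
# by inserting the rule at the top of the INPUT chain.
iptables -I INPUT 1 -s 203.0.113.7 -j DROP
```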
psad also includes a signal handler such that if a USR1 signal is sent to the psad process, psad calls Data::Dumper to dump the contents of the %Scan hash to the /var/log/psad/scan_hash.$$ file, where $$ represents the PID of the psad process. The %Scan hash is the main psad data structure and contains all scan information; hence, dumping its contents is useful both as a way to capture a record of scan data and as a debugging tool for extending psad's feature set. psad is a work in progress with much on the to-do list, but it is being developed actively and has a relatively short version-release cycle.
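Triggering the %Scan dump from the command line looks like this (assuming psad is running and pidof can find it):

```shell
# Ask psad to dump its %Scan hash via Data::Dumper.
kill -USR1 "$(pidof psad)"

# The dump lands in scan_hash.<pid>, one file per psad process:
ls /var/log/psad/scan_hash.*
```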