Kernel Korner - The Hidden Treasures of iptables
takes the first four characters of /bin/ls, which is an ELF file that contains the string we want.
We can expand this example by declaring that we trust the content from 192.168.0.5 and, therefore, don't want to apply the filter to that server. This is done easily by adding an inverted match on the IP address, like this:
iptables -A FORWARD -i eth0 -p tcp \
    ! -s 192.168.0.5 --sport 80 \
    -m string --string '|7F|ELF' -j DROP
This example has a couple of problems that highlight the limitations of the string match module. First, the rule matches any packet that contains the sequence anywhere in its data, not only at the start of the file. The rule therefore can produce false positives and block packets we didn't intend to block. Second, if the string we are looking for is split across two adjacent packets, it isn't matched at all; the module needs the entire string to appear within a single packet.
So, the string module is useful but basic. It doesn't allow for case-insensitive matches or for the location of the string to be specified, nor does it allow strings to be found when split over multiple packets in the data stream. There is plenty of scope for an extended version of this module to be written.
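Much of that extended version has since appeared: later kernels ship an xt_string match that adds an `--algo` option and `--from`/`--to` offsets to restrict where in the packet the search happens. A sketch, assuming a kernel and iptables build that provide these options (the byte offsets here are illustrative guesses at where the HTTP payload begins, not values from the original article):

```shell
# Drop ELF content from Web responses, but only look for the magic
# bytes near the start of the payload rather than anywhere in the
# packet. --algo bm selects Boyer-Moore search; --from/--to bound
# the search window in bytes from the start of the packet.
iptables -A FORWARD -i eth0 -p tcp --sport 80 \
    -m string --algo bm --from 52 --to 120 \
    --hex-string '|7F 45 4C 46|' -j DROP
```

Note that even this newer match still operates per packet, so a string split across two packets remains invisible to it.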
The mport extension allows a single rule to specify a number of port numbers and ranges using an extended syntax. Without mport, the iptables command can specify either a single port or a range of adjacent ports in a single command. With mport in place, the syntax allows more complex constructs. For example, we could permit X terminals, Web and mail with a single command, like this:
iptables -A INPUT -p tcp -m mport \
    --dports 80,110,21,6000:6003 -j ACCEPT
Without using mport, this would have to be specified using four separate commands:
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 110 -j ACCEPT
iptables -A INPUT -p tcp --dport 21 -j ACCEPT
iptables -A INPUT -p tcp --dport 6000:6003 -j ACCEPT
Using a single rule in place of four offers a potential performance advantage because packets passing through the system require less processing. It also makes the maintenance of the rules files easier because services requiring identical processing can be grouped together easily. As you probably guessed, mport is short for multiple ports.
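In current iptables releases, the same grouping is available through the stock multiport match, so no separate mport patch is needed. A sketch, assuming a standard netfilter build:

```shell
# multiport accepts a comma-separated list of up to 15 ports;
# a range such as 6000:6003 counts as two of those 15 entries.
iptables -A INPUT -p tcp -m multiport \
    --dports 80,110,21,6000:6003 -j ACCEPT
```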
The time module allows rules to introduce the time of day and the day of the week into matching logic. Example uses would be to allow access to personal Web sites only during the lunch hour or to divert Web traffic to a secondary server during routine maintenance periods. The following example renders the Web service inaccessible between the hours of 4 and 6:30am on Fridays, presumably for system maintenance:
iptables -A INPUT -p tcp --dport 80 -m time \
    --timestart 04:00 --timestop 06:30 --days Fri \
    --syn -j REJECT
It is worth noting that the --timestart, --timestop and --days options all must be specified. So if you want a rule that is not day-of-week dependent, you must list all seven day names; the option cannot simply be omitted.
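For instance, a maintenance window that applies every day still needs the full day list spelled out (a sketch, using the same time-match syntax as the example above):

```shell
# Reject new Web connections during a nightly maintenance window,
# every day of the week -- all seven day names must be listed.
iptables -A INPUT -p tcp --dport 80 -m time \
    --timestart 04:00 --timestop 06:30 \
    --days Mon,Tue,Wed,Thu,Fri,Sat,Sun \
    --syn -j REJECT
```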
You really don't want to wander into a tar pit if you value your life or appreciate changes of scenery. They are nature's equivalent of fly paper; come too close and you won't leave in a hurry. The TARPIT component of iptables is the networking equivalent: if you are unwise enough to establish a TCP/IP connection to a port that is a tar pit, you will find it hard to close the connection and release the used system resources for future use.
To achieve this tar pit state, iptables accepts the incoming TCP/IP connection and then switches to a zero-byte window. This forces the attacker's system to stop sending data, rather like the effect of pressing Ctrl-S on a terminal. Any attempts by the attacker to close the connection are ignored, so the connection remains active and typically times out only after 12–24 minutes. This consumes resources on the attacker's system but not on the Linux server or firewall running the tar pit. You could use the following iptables command to pass packets to the pit:
iptables -A INPUT -p tcp -m tcp --dport 80 -j TARPIT
You probably don't want to use conntrack and TARPIT on the same system, particularly if you anticipate catching a lot of flies with this particular brand of fly paper. Each stuck connection consumes conntrack resources.
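If you do need connection tracking elsewhere on the same machine, one approach is to exempt the tarpitted ports from tracking so stuck connections never enter the conntrack table. A sketch, assuming a kernel that provides the raw table and the NOTRACK target (this pairing is my suggestion, not part of the original article):

```shell
# Tell conntrack to ignore traffic to and from the tarpitted port,
# so each stuck connection does not occupy a conntrack table entry.
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK
```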
One way to confuse potential attackers is to make your Linux system look like a Microsoft Windows machine by causing the NetBIOS ports to respond to port scans, then passing any connection requests to the tar pit. This wastes attackers' time: they sense a possible opening and try to gain access, only to be frustrated by long timeouts and an apparently buggy target. Rules such as the following produce this result:
iptables -A INPUT -p tcp -m tcp -m mport \
    --dports 135,139,1025 -j TARPIT
Another possibility is to TARPIT all ports except the ones you genuinely want to use. This again leads outsiders to see every port as open and to waste time attempting to gain access. Moreover, a configuration like this prevents scanners such as nmap from correctly fingerprinting the operating system running on the server. In this example, we allow Web and e-mail traffic and bog down everything else:
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
iptables -A INPUT -p tcp -m tcp -j TARPIT
You can find an interesting real-life story of how TARPIT and string helped one particular system administrator (not me) at www.spinics.net/lists/netfilter/msg17583.html.