Securing Your Network against Kazaa
When an iptables rule specifies QUEUE as a target, any packets matched by the rule are put into a queue for collection by an application such as ftwall. The program can then drop the packet or pass it back to Netfilter for further checking and forwarding. A typical rule for invoking this mechanism looks like this:
iptables -A FORWARD -p tcp -i eth0 --dport 123 \
  --syn -j QUEUE
With this rule in place, all SYN packets from the network connected to eth0 and destined for port 123 on a remote host are passed to the program first. The program reads the packets and returns its verdict using the libipq library and ip_queue module.
QUEUE is a standard part of the iptables software delivered with most popular distributions. To verify that it is available on your system, type insmod ip_queue and check that no error message is displayed. For more details, see the Netfilter FAQ at www.netfilter.org/documentation/FAQ/netfilter-faq-4.html.
In order to explain the workings of ftwall, the description needs to go hand in hand with a partial explanation of FastTrack's connection logic. FastTrack connects to peers using three distinct approaches: a flood of UDP packets, parallel TCP/IP connections and a more traditional TCP/IP connection pattern. The software switches between modes if it believes it is being blocked. ftwall endeavors to keep clients running in the first mode for as long as possible, because this is the easiest to identify and allows a list of the peer addresses to be built up.
When a client starts, it sends large numbers of UDP packets through the firewall that are identifiable by their length and content. Netfilter queues these for processing by ftwall (Figure 1). Then, ftwall takes internal notes of the source and destination addresses of the packets and spoofs a reply to the client, thus preventing it from concluding that UDP packets are being blocked by the firewall and keeping it running in the first mode for a little longer.
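The phase-one bookkeeping described above can be sketched as follows. This is a simplified illustration in Python, not ftwall's actual implementation: the packet signature, the reply mechanism and all names here are placeholders, since FastTrack's real wire format is not described in this article.

```python
known_peers = set()  # destination IPs identified as FastTrack nodes

def looks_like_fasttrack_udp(payload: bytes) -> bool:
    # Hypothetical signature check; the real ftwall matches the specific
    # lengths and contents of FastTrack's startup packets.
    return len(payload) == 12 and payload.startswith(b"\x27\x00")

def handle_udp(src: str, dst: str, payload: bytes, send_reply) -> str:
    """Return a Netfilter-style verdict for a queued UDP packet."""
    if looks_like_fasttrack_udp(payload):
        known_peers.add(dst)      # remember this address as a FastTrack peer
        send_reply(src, payload)  # spoof a reply so the client stays in mode one
        return "DROP"             # the real packet never leaves the firewall
    return "ACCEPT"              # unrelated UDP traffic passes through
```

The key point is that the spoofed reply and the growing `known_peers` set work together: the client keeps talking, and every packet it sends reveals another address to filter.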
The iptables rule to set up this queuing, assuming eth0 is the home network interface, is:
iptables -A FORWARD -p udp -i eth0 -j QUEUE
When FastTrack receives the spoofed reply, it tries to use UDP to request some extra setup information and then attempts to make a TCP/IP connection to the same address. These UDP and TCP packets are passed to ftwall, which now knows that the destination addresses refer to FastTrack, and so it drops them (Figure 2). Other UDP non-FastTrack packets and TCP/IP SYN packets are returned to Netfilter for further checks and forwarded to their destination.
The rule to queue SYNs to ftwall is:
iptables -A FORWARD -p tcp -i eth0 --syn -j QUEUE
The client repeats this UDP and SYN sequence for a while, usually (but not always) until all the addresses it knows about have been attempted at least once. This means that all these addresses are now also known to ftwall as ones that should be filtered.
After a while, the client changes tack and switches to the parallel TCP/IP connection logic with strong data packet encryption. ftwall continues to block connections to addresses it noted during phase one. For any other addresses, the only clue that identifies them as FastTrack connections is the high number of SYN packets seen over a short period. If ftwall relied solely on the UDP packets to do the blocking, it would be defeated, particularly if the client hadn't tried all its known addresses in the first phase. The solution to this problem is a time lock.
In this new mode, the client mixes TCP/IP connection attempts to addresses that ftwall already knows about with others that haven't yet been revealed (if there are any). ftwall keeps a note of the time when the most recent known address was attempted and blocks all TCP/IP connections from the same source IP address for a configurable time after this. Each SYN packet sent to a known address resets the timer. Provided these connections are attempted frequently enough, ftwall continues to block them.
This logic has the side effect that all TCP/IP connections from a rogue workstation are blocked while FastTrack is running there, including accesses to Web and FTP sites. It can be argued that this is acceptable because the user of the workstation is breaking the organization's policy. Once the client application is closed, the timer ceases to be refreshed, and TCP/IP connections will be allowed again once it has expired. This takes two minutes with the default configuration.
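The time-lock behaviour described in the last two paragraphs can be sketched like this. Again, this is an illustrative Python model under assumed names, not ftwall's source: a SYN to a known FastTrack address resets a per-source timer, and while the timer is live, every TCP connection from that source is dropped.

```python
LOCK_SECONDS = 120        # ftwall's default lock period: two minutes

known_peers = {"1.2.3.4"}  # addresses learned in phase one (example data)
lock_until = {}            # source IP -> time until which all its TCP is blocked

def handle_syn(src: str, dst: str, now: float) -> str:
    """Return a verdict for a queued SYN packet at time `now` (seconds)."""
    if dst in known_peers:
        lock_until[src] = now + LOCK_SECONDS  # reset the timer on every hit
        return "DROP"
    if lock_until.get(src, 0) > now:
        return "DROP"    # source is still locked: block all TCP, Web and FTP included
    return "ACCEPT"      # lock expired or never set: normal traffic resumes
```

Once the client stops probing known addresses, nothing refreshes `lock_until`, and after two minutes the workstation's ordinary TCP traffic flows again.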
After FastTrack has been working in this mode for a while, it appears to come to the conclusion that the parallel style of connection attempt is causing a problem, and it switches to its third mode. Now it slows down the rate of connection attempts and uses the more traditional approach of trying one address at a time, with a few seconds of timeout on each one. This new approach frustrates the logic we have built so far, and the client eventually breaks through. This can take over an hour to achieve, but clients that don't reveal all known addresses early on stand a reasonable chance of establishing a connection during this phase. And once a single connection is established, a completely new set of addresses is downloaded, and we are no better off than we would have been if no blocking was employed in the first place.
To defeat this third mode, ftwall needs more information to allow it to determine whether FastTrack is still in use. One way it can do this is with a little more spoofing. From time to time, ftwall sends the client a UDP packet that is a copy of the one that the client itself uses to open communications with a peer (Figure 3). If the FastTrack software is running on the workstation, it replies with a packet that can be recognized easily, thus causing the lock timer to be reset. The relatively small number and size of these probe packets means the impact on network usage is minimal.
Because this packet is not for forwarding to a public address but destined for the firewall itself, an iptables rule in the INPUT chain is required to pass it to ftwall. The rule to use is:
iptables -A INPUT -p udp -i eth0 -j QUEUE
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.