Passive-Aggressive Resistance: OS Fingerprint Evasion
Having established that evasion does not mean security, we need to look at another aspect of this process: performance. Because a good evasion setup filters your traffic en masse, system performance is likely to suffer. Obviously, if you have a site that hosts web pages for 10,000 clients, performance is a bigger issue than if you simply have a Linux box set up somewhere for you and your friends to check e-mail and IRC. As an administrator, you need to decide which is the bigger reward for you (and your users): performance or privacy.
To illustrate the feasibility and relative ease of fingerprint evasion I have included a small sample user-space application (OSFPE) for Linux, which makes use of the Netfilter kernel modules [see Listing 1 at ftp.linuxjournal.com/pub/lj/listings/issue89/4750.tgz]. Through the use of such software as Netfilter in Linux, OS fingerprint evasion is becoming increasingly practical. Similar modifications and applications are sprouting up all over the place; in BSD it is possible to accomplish this task via ipfilter and a moderate amount of code (at the time of this writing ipfilter has been removed from the BSD CVS tree, sorry guys). Windows users (who are by far at the biggest disadvantage in this arena) are discovering ways to shim their TCP/IP communications and, with the advent of Libpcap for Win32, capture and forge their own packet responses.
Netfilter, as stated by its author, is “a framework for packet mangling”. Sounds fun, eh? Netfilter interfaces with the Linux kernel (kernels 2.4.x and above, to be exact) and registers hooks for each protocol. If the proper rules are in place, these hooks capture incoming or outgoing network traffic that matches the specified rules. Each captured packet is then processed and marked with a verdict: NF_DROP to have the packet dropped, NF_ACCEPT to accept it for normal processing on the stack or NF_QUEUE to have it queued for manipulation in user space. If the packet is queued for manipulation in user space, the ip_queue driver places it in a queue; it is then handled asynchronously by any applications running in user space that have registered themselves for these types of packets. When these applications pull packets from the queue, they have the ability to manipulate, accept or reject them. If a packet is accepted, it is handed off to the next running application that has registered for such a packet. If the packet is flagged NF_DROP, it is dropped and processing of that particular packet ceases. Through the use of Netfilter, applications in user space essentially have kernel-level control of network traffic.
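The verdict flow described above can be sketched in a few lines. This is an illustrative model, not the real libipq API: the verdict constants match linux/netfilter.h, but the `dispatch` function and handler signature are hypothetical stand-ins for the queue hand-off between registered user-space applications.

```python
# Verdict constants as defined in linux/netfilter.h.
NF_DROP = 0    # discard the packet; processing ceases
NF_ACCEPT = 1  # hand the packet back to the stack for normal processing
NF_QUEUE = 3   # queue the packet for a user-space handler

def dispatch(packet, handlers):
    """Mimic the queue hand-off: each registered user-space handler gets
    the packet in turn and may rewrite it, accept it or drop it."""
    for handler in handlers:
        verdict, packet = handler(packet)
        if verdict == NF_DROP:
            return NF_DROP, None  # dropped: no further processing
        # NF_ACCEPT: pass the (possibly mangled) packet to the next handler
    return NF_ACCEPT, packet

# Example handlers: one mangles the payload, one passes it through.
mangle = lambda pkt: (NF_ACCEPT, pkt.upper())
passthru = lambda pkt: (NF_ACCEPT, pkt)
```

In the real OSFPE application this loop lives in the kernel's ip_queue machinery; the point here is only that a dropped packet short-circuits the chain, while an accepted one flows to the next registered consumer.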
iptables is an application used to interface with Netfilter to set, view and remove a system's current network filtering rules. I mention iptables here because, in developing the proof-of-concept code, we felt it was better to introduce users to the iptables program for rule administration rather than having the application handle the rules itself. This should help people better understand what is going on with the packet queuing.
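As a rough idea of what that rule administration looks like, rules along these lines (a hypothetical fragment, not taken from the OSFPE distribution) hand incoming TCP, UDP and ICMP traffic to the user-space queue via the QUEUE target used by the 2.4-era ip_queue module:

```
# Send incoming traffic to the user-space queue for OSFPE to inspect.
iptables -A INPUT -p tcp  -j QUEUE
iptables -A INPUT -p udp  -j QUEUE
iptables -A INPUT -p icmp -j QUEUE

# List the current INPUT rules to verify:
iptables -L INPUT -n
```

With rules like these in place, every matching packet lands in the queue and waits for a registered user-space application to issue a verdict.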
By taking advantage of the Netfilter modules and iptables rule administration program we were able to set up rules to capture incoming UDP, TCP and ICMP packets. Based on the incoming packets and the source host we either allow them to access the system normally or craft responses to appear as a Windows host, as defined in one of nmap's OS fingerprint entries. Here is the fingerprint we were attempting to match, and a brief walk-through on how we accomplished this goal:
TSeq(Class=TD|RI%gcd=1|2|3|4|5|8|A|14|1E|28|5A%SI=<1F4)
T1(DF=Y%W=2017|16D0|860|869F%ACK=S++%Flags=AS%Ops=M|MNWNNT)
T2(Resp=Y%DF=N%W=0%ACK=S%Flags=AR%Ops=)
T3(Resp=Y%DF=Y%W=0%ACK=O%Flags=AR%Ops=)
T4(DF=N%W=0%ACK=S++|O%Flags=R%Ops=)
T5(DF=N%W=0%ACK=S++%Flags=AR%Ops=)
T6(DF=N%W=0%ACK=S++|O%Flags=R%Ops=)
T7(DF=N%W=0%ACK=S++%Flags=AR%Ops=)
PU(DF=N%TOS=0%IPLEN=38%RIPTL=148%RID=E%RIPCK=E%UCK=E%ULEN=134%DAT=E)
The first line states that we need TCP sequencing that is either time dependent (TD) or uses random increments (RI), with a sequence index (SI) less than 0x1F4 (500). This was actually pretty easy to accomplish, or match up I should say. First we grabbed the incoming packet, took the TCP sequence number, generated a pseudo-random number between 1 and 500 and added the values together. This satisfied both the random-increment and greatest common divisor (gcd) requirements of the fingerprint.
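The sequence-number trick reduces to a one-liner. This is a sketch of the approach just described, not code from OSFPE itself; the function name and 32-bit wrap handling are my own:

```python
import random

SI_MAX = 0x1F4  # 500, the bound from the TSeq line of the fingerprint

def next_isn(prev_isn):
    """Produce the next initial sequence number: the previous one plus a
    pseudo-random increment between 1 and 500, wrapped at 32 bits as TCP
    sequence numbers are."""
    return (prev_isn + random.randint(1, SI_MAX)) & 0xFFFFFFFF
```

Because every increment falls in 1..500, nmap's sampled differences stay under 0x1F4 and their gcd lands among the small values the fingerprint allows.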
Next we broke down the various packet tests (T1-T7) and created cases for them in our TCP handler. All of these are pretty straightforward and simply dictate how the host should respond to different types of packets sent to open and closed ports. The exact tests and their parameters are covered in depth in Fyodor's paper on remote OS detection.
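To make the mapping concrete, here is a hypothetical response table transcribed from the T2-T7 lines of the fingerprint above (T1 is the ordinary SYN-ACK to an open port). The structure and names are illustrative, not OSFPE's; where the fingerprint allows alternatives (ACK=S++|O on T4 and T6) the table picks S++:

```python
# df: DF bit on the reply; window: TCP window size;
# ack: "S" echoes the probe's sequence, "S++" is sequence + 1,
# "O" is some other value; flags: A=ACK, R=RST.
RESPONSES = {
    "T2": {"df": False, "window": 0, "ack": "S",   "flags": "AR"},
    "T3": {"df": True,  "window": 0, "ack": "O",   "flags": "AR"},
    "T4": {"df": False, "window": 0, "ack": "S++", "flags": "R"},
    "T5": {"df": False, "window": 0, "ack": "S++", "flags": "AR"},
    "T6": {"df": False, "window": 0, "ack": "S++", "flags": "R"},
    "T7": {"df": False, "window": 0, "ack": "S++", "flags": "AR"},
}

def reply_fields(test):
    """Look up how the spoofed Windows host should answer a given probe."""
    return RESPONSES[test]
```

The TCP handler then only has to classify an incoming probe as one of these tests and emit a packet with the matching fields.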
Next we matched up our response for a UDP port-unreachable query. What nmap does here is send a UDP packet to a closed port on the host and wait for a response in the form of an ICMP port-unreachable packet. ICMP port-unreachable packets simply tell the querying host that delivery of its UDP message failed because no UDP service is listening on that port. On some networks these messages never get sent back and are dropped at the router. In order to conform to the fingerprint, we made an effort to send back exactly what nmap was expecting.
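Crafting such a reply by hand is straightforward. The following is a minimal sketch, not OSFPE's code: it builds the ICMP header (type 3, code 3) that quotes the offending IP header plus the first 8 bytes of the datagram, per RFC 792, assuming a 20-byte IP header with no options:

```python
import struct

def checksum(data):
    """Internet checksum (RFC 1071): one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def icmp_port_unreachable(original_datagram):
    """Build an ICMP destination-unreachable (type 3), port-unreachable
    (code 3) message echoing the first 28 bytes of the original datagram
    (IP header + 8 bytes, which covers the UDP header)."""
    payload = original_datagram[:28]
    header = struct.pack("!BBHI", 3, 3, 0, 0)  # checksum field zeroed
    csum = checksum(header + payload)
    return struct.pack("!BBHI", 3, 3, csum, 0) + payload
```

Note the nmap PU test also checks fields like the quoted IP length, ID and checksums (IPLEN, RID, RIPCK, UCK), so the quoted bytes must be echoed back unmodified.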
Finally, as an extra little bonus, we sent back SYN-ACK packets for specific TCP ports on our host if the scanner happened to probe them, making those ports appear open. Similarly, we sent back no response for particular UDP ports that we wanted to appear open (as stated above, only closed UDP ports send back a port-unreachable message). When the scan of our host is complete, it should appear as though TCP ports 135 and 139 and UDP ports 135, 137 and 138 are open. If we attempt to fingerprint our host, we should match the above-listed fingerprint and be identified as “Windows NT4 / Windows 95 / Windows 98”.
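The port-faking decision boils down to a small lookup. Again a hypothetical sketch of the logic, not the OSFPE source; the port sets come straight from the paragraph above:

```python
# Ports we want the scanner to see as open on our fake Windows host.
FAKE_OPEN_TCP = {135, 139}
FAKE_OPEN_UDP = {135, 137, 138}

def probe_response(proto, port):
    """Decide what the evasion layer sends back to a port probe:
    SYN-ACK / RST for TCP, silence (None) or ICMP port-unreachable
    for UDP."""
    if proto == "tcp":
        return "SYN-ACK" if port in FAKE_OPEN_TCP else "RST"
    if proto == "udp":
        return None if port in FAKE_OPEN_UDP else "ICMP-port-unreach"
    raise ValueError("unhandled protocol: %s" % proto)
```

Silence on the fake-open UDP ports is what sells the illusion: since only closed UDP ports elicit port-unreachable messages, saying nothing is how an open UDP port "responds".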
As a final note, proof-of-concept code is just that: a little piece of programming used to prove a point. Do yourself a favor and don't run this on a critical device. Open it up, learn from it, modify it, exploit it, but don't depend on it. I've made an attempt to keep the code safe and somewhat readable (arguably), but I can't promise anything.