Paranoid Penguin - Building a Transparent Firewall with Linux, Part I

Yes, you still need a firewall. How about a transparent one?

When I started writing this column in autumn of 2000, I had the day job of firewall engineer. I enjoyed that type of work, but I was enough of a big-picture kind of guy to be aware that every year, firewalls were becoming less important in the overall scheme of things. In fact, I was convinced that within a decade or so, firewalls would be nearly if not completely obsolete.

But, I was wrong! Firewalls aren't dying. They're evolving, and even though traditional firewall technologies probably achieve less than they did ten years ago, the threats we'd face if we did without them still justify the effort and expense of keeping them around.

So, I think the time is ripe for me to return to my roots, so to speak. This month, I kick off a series of articles I've been meaning to tackle for some time: how to build a transparent firewall using Linux. To begin, I set the stage by explaining why firewalls are still relevant in the first place and the difference between “routing” and “bridging” (transparent) firewalls.

What Firewalls Do and Don't Do

I was tempted to begin with a primer on firewall architecture and design, discussing the difference between the multihomed and “sandwich” topologies, rule design methodology and so forth. But my April 2007 article “Linux Firewalls for Everyone” does that just fine, and you still can read it on-line (see Resources). Instead, I'm going to talk about why firewalls are still useful.

An IP firewall, as opposed to an application firewall or XML gateway, inspects network traffic at the IP and TCP/UDP layers, possibly with some very limited amount of application intelligence (for example, being able to distinguish between FTP “get” and “put” commands, HTTP “post” vs. “get” and so forth). In actual practice, most packet-filtering and “stateful” IP-filtering firewalls evaluate traffic primarily based on source and destination IP addresses, and source and destination UDP or TCP ports.

When I started out in network engineering, just filtering traffic based on these few criteria seemed plenty powerful enough for most practical firewall applications, especially if the firewall was smart enough to track network traffic by session rather than by individual packet. (In olden times, it wasn't enough to program your IP-filtering firewall to allow outbound DNS queries on UDP port 53; you also had to put in a corresponding rule to allow DNS replies originating from UDP port 53. State-tracking, or “stateful”, firewalls automatically correlate packets to already-allowed sessions.)
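As a sketch of the difference (the interface-free rules and the DNS example are my own illustration, not from a specific production ruleset), the old stateless approach needed an explicit rule for the replies, whereas a stateful ruleset matches return traffic automatically:

```shell
# Hypothetical iptables example: allow outbound DNS queries from a LAN.

# Old stateless style: two rules, one for each direction of the exchange.
iptables -A FORWARD -p udp --dport 53 -j ACCEPT    # queries going out
iptables -A FORWARD -p udp --sport 53 -j ACCEPT    # replies coming back

# Stateful style: one rule for new queries, plus a generic rule that
# matches packets belonging to already-allowed sessions.
iptables -A FORWARD -p udp --dport 53 -m state --state NEW -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```

The stateful version is not only shorter, it's safer: the stateless `--sport 53` rule admits any packet claiming to come from port 53, whether or not anyone inside actually asked a question.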

Because in those days of yore different network applications all used different TCP and UDP ports—TCP port 23 for telnet, TCP and UDP ports 53 for DNS, TCP ports 20 and 21 for FTP and so forth—filtering by port number equated to filtering by application type.


This didn't mean firewalls could detect or prevent evil that might occur over an allowed source/destination address/port combination. We had no illusions that the firewall could stop, for example, Apache buffer overflow attacks against a public Web server reachable from the entire world on TCP port 80. The firewall could (and can), however, prevent attempts to connect to that Web server via Secure Shell on TCP port 22, except perhaps from some internal, authorized access point.

The problem is that every year, we're less able to rely on the assumption that the things we should be worried most about will happen on ports that the firewall can block altogether. This is because so much of what people use networks for happens on only two TCP ports: TCP 80 (HTTP) and TCP 443 (HTTPS).

Even ten years ago, developers were racing to migrate from client-server application architectures, in which every network application used its own communication protocol, to the Web services model, in which there are really only two types of network transactions, Web sessions and database transactions. Well before then, people started figuring out ways to do practically everything that can be done over networks—from browsing a filesystem to running a remote desktop session—over HTTP using a Web browser.

But does this really mean that firewalls are obsolete? Definitely not, not even in contexts where Web servers are involved. Let's suppose it were true that all network traffic between two security zones happened over TCP port 443. By restricting traffic by source IP address and destination IP address, you still could make decisions about which hosts could initiate any transactions to any other given host.

If you filter traffic strictly based on source and destination IP addresses and, in practical terms, not by TCP/UDP port (service type), you may think you haven't achieved much. All an attacker has to do in order to attack a protected system is gain access to some other system that the firewall allows to initiate transactions with the actual target. But, what if none of those "secondary" systems that the firewall considers trusted is externally reachable? If that's the case, your "crude" firewall rules may, in fact, have effectively mitigated the risk of remote compromise for that system.

This scenario is illustrated in Figure 1, which shows a firewall that sits in between two different networks, Zone A and Zone B, and the Internet. The firewall blocks all inbound traffic from the Internet to either zone, but allows hosts in Zone B to initiate transactions with hosts in Zone A. In this case, the firewall is highly effective in making it unfeasible for Internet-based attackers to exploit the trust relationship between Zones A and B, even if the firewall filters only on source IP address.

Figure 1. Filtering Only by Source IP Address
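A minimal iptables sketch of Figure 1's policy might look like the following (the zone subnets and variable names are my own assumptions for illustration):

```shell
# Hypothetical addressing: Zone A = 10.0.1.0/24, Zone B = 10.0.2.0/24.
ZONE_A=10.0.1.0/24
ZONE_B=10.0.2.0/24

# Default-deny: anything not explicitly allowed, including all inbound
# Internet traffic to either zone, is dropped.
iptables -P FORWARD DROP

# Hosts in Zone B may initiate transactions with hosts in Zone A...
iptables -A FORWARD -s $ZONE_B -d $ZONE_A -m state --state NEW -j ACCEPT

# ...and reply packets for already-allowed sessions flow back.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Note that even this address-only ruleset enforces the trust relationship's direction: an Internet host can't reach Zone A directly, nor can it reach the Zone B hosts that Zone A trusts.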

But, I'm not really advocating address-only filtering. The fact is there are still plenty of important network services that use ports and protocols besides TCP 80/443 and HTTP/HTTPS. Domain Name Services still use TCP and UDP 53; Microsoft Remote Desktop Protocol (Terminal Services) still uses TCP port 3389; Oracle still uses TCP port 1521; and so on. Modern firewalls still make plenty of meaningful decisions about what traffic to allow, based on service type.

Another argument against the usefulness of firewalls is the fact that malware (viruses, trojans, worms) has evolved from being mainly a nuisance to becoming a sophisticated means of infiltrating even well-secured networks. Ten years ago, the most likely impact of a virus or worm outbreak in one's organization was a disruption of service. Malware tended to strain system and network resources, but by its indiscriminate nature, it wasn't very useful for stealing data or breaching sensitive systems.

Nowadays, however, malware often is targeted at specific organizations by attackers who first go to great lengths to learn what data and systems to have their malware seek out, based on known (or highly probable) vulnerabilities on those systems. In other words, nowadays malicious hackers often deploy worms as “avatars” of themselves!

Such targeted malware is extremely difficult to detect, remediate or trace back. Often, it's placed directly on target networks or systems by co-opted insiders. Therefore, firewalls frequently have little useful role in blocking the activity of targeted malware. It doesn't matter how thick your fortress walls are if your mailroom guys have been bribed to ignore the fact that the large package addressed to your king is ticking.

But, I ask you: does the fact that bad guys may simply mail your king a bomb mean that you can safely replace those expensive castle walls—the tuckpointing bill for which in any given year is probably astronomical—with cardboard? I, for one, don't think you can.

Just because attackers are developing ever-more sophisticated tools doesn't mean they'll forget how to use the old ones. Remember the LAND attack from the late 1990s? It involved sending spoofed TCP packets bearing the same source IP address as the target to which you're sending them, causing a massive “reply to myself” sort of loop, which impairs system performance (potentially cripplingly). LAND was made obsolete by system patches and firewall protections—or so we thought. But in 2005, a security researcher named Dejan Levaja discovered that Windows Server 2003 and Windows XP SP2 were vulnerable to the LAND attack.
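Blocking LAND-style spoofing at the firewall is, in fact, simple; here's a hedged sketch (the external interface name eth0 and the internal range 10.0.0.0/8 are assumptions you'd substitute with your own):

```shell
# Hypothetical anti-spoofing rules: a LAND packet arrives from the outside
# bearing one of our own addresses as its source, so any packet entering
# the external interface with an internal source IP is bogus and dropped.
iptables -A INPUT   -i eth0 -s 10.0.0.0/8 -j DROP
iptables -A FORWARD -i eth0 -s 10.0.0.0/8 -j DROP
```

The same two rules also stop a broad class of other spoofing attacks that rely on forging internal source addresses.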

The fact that LAND attacks are (still) trivially blockable by firewalls is actually beside the point. What I'm really trying to illustrate is that no attack is truly obsolete for as long as it's still feasible and systems remain vulnerable to it. In theory, if your organization rigorously hardened every single computer connected to the network, you could take down your firewalls and might not get infected or attacked right away. But I guarantee you would be, sooner rather than later, and not necessarily by cutting-edge attack techniques.

So, to summarize, even if you think all your firewall does is block traffic from unexpected sources, it still provides meaningful protection. Modern network traffic does not, in fact, consist solely of HTTP and HTTPS; it still plays out over a surprisingly wide range of different ports and protocols. And, the rise in use of targeted malware and attacks against Web applications aren't arguments against firewalls; they're simply reasons that firewalls alone aren't sufficient to protect critical systems.

Can we agree, then, that firewalls are still an important tool in the network security toolkit? I hope so, because I'm about to spend several months showing you how to build a particularly clever type of Linux firewall: a transparent firewall.
