Paranoid Penguin - Building a Transparent Firewall with Linux, Part II
Last month, I kicked off a series of articles on transparent firewalls, beginning with a brief essay on why firewalls are still relevant in an age of Web applications and tunneled traffic. I also explained the difference between a standard, routing firewall and a transparent, bridging firewall.
This month, I begin discussing actually building a transparent firewall. Making a firewall invisible to the network is cool already, but to spice things up even further, I'm going to show how to build a transparent firewall using OpenWrt running on a cheap broadband router. Let's get started!
I want to dive right into it, so I'm not going to review very much from last time. Suffice it to say for now that whereas a normal “routing” firewall acts as an IP gateway between the networks it interconnects, a “bridging” firewall acts more like a switch—nothing on either side of the firewall needs to define the firewall explicitly as a route to whatever's on the other side.
One important ramification of this is that with a routing firewall, the networks connected to each firewall interface need to be on different IP subnets. This means if you insert a firewall between different networks, those networks must usually at least be re-subnetted, if not re-IP-addressed altogether.
In contrast, the bridging firewall we're going to build in this series of articles won't require anything on your network to be reconfigured. At worst, you'll need to re-cable things to place the firewall in a “choke point” between the parts of your network you want to isolate from each other.
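To give you a feel for what makes the firewall "transparent", here's a minimal sketch of the underlying mechanism on a generic Linux host: a bridge interface with no IP address of its own, so the firewall forwards Ethernet frames without appearing as a hop at the IP layer. The interface names eth0 ("outside") and eth1 ("inside") are placeholders, and the commands assume the bridge-utils package is installed:

```shell
# Create a layer-2 bridge; note that we never assign it an IP address,
# which is what keeps the firewall invisible to the networks it joins.
brctl addbr br0          # create the bridge device
brctl addif br0 eth0     # attach the "outside" interface
brctl addif br0 eth1     # attach the "inside" interface

# Bring everything up; br0 now forwards frames between eth0 and eth1
# like a two-port switch.
ip link set eth0 up
ip link set eth1 up
ip link set br0 up
```

This is configuration, not policy: by itself the bridge passes everything. The filtering rules come later, and on OpenWrt the equivalent bridge setup is done through its own configuration files rather than by hand.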
Suppose you want to use the transparent firewall on a home network to protect it from Internet-based attackers. In that case, you may want only two firewall zones, such as “outside” (the Internet) and “inside” (your home network). Most home users, it's safe to say, connect everything in their network directly to their DSL or cable modem via some flavor of 802.11 Wireless LAN (WLAN), with maybe one or two things connected to Ethernet interfaces on the same modem. Figure 1 shows a typical home network of that type.
If you're such a user, the first step in deploying a transparent firewall is to move everything off the DSL/cable modem (except, of course, the actual DSL or cable connection) and onto either the transparent firewall (if it has enough interfaces), an Ethernet switch (if you don't need WLAN), a “broadband router” (a WLAN access point with built-in Ethernet switch), or onto some combination of those things.
Step two, of course, is placing the transparent firewall between the DSL/cable modem and whatever device (or devices) to which you connected the rest of your network. Despite the list of options in the previous paragraph, there really are only two approaches to this: connecting all the devices in your network to the transparent firewall, which may be perfectly feasible if your firewall has enough interfaces and your network is small enough, or collapsing them back to one or more other network devices that are, in turn, connected to the firewall.
Figure 2 shows the latter approach. In Figure 2, the two wireless laptops and the wired network printer connect to a broadband router, whose “Internet” Ethernet interface is cabled to the “inside” interface of a transparent firewall. The firewall's “outside” interface is cabled to the Ethernet interface of a DSL or cable modem.
(If I were writing this in the 1990s, at this point, I would have to explain crossover cables. But in the modern era, in which pretty much all Ethernet hardware automatically detects “crossed-over” versus “straight-through” connections, all you should need are ordinary patch cables. If you did need crossover cables, however, they would be the two cables in Figure 2 connected to the firewall.)
Even though I'm about to explain why and how I'm using a Linksys WRT54GL broadband router, which boasts five Ethernet ports plus 802.11g WLAN, as my transparent firewall platform, for simplicity's sake, I'm going to assume you're using a separate network device like the broadband router in Figure 2, at least for the time being. Although I reserve the right to cover other topologies in later installments of this series, the immediate task will be to build a simple two-interface firewall. (Why? Mainly because it would take too much space to explain how to set up wireless networking on the firewall.)
So, what will our two-port transparent firewall do? Mainly, it will protect the internal network from arbitrary connections from the outside world. In our test scenario of “basic home user”, there are no Web servers, SMTP relays or other “bastion hosts”. (As with WLAN-on-the-firewall, I may cover adding an “Internet DMZ” zone later on in this series.) The firewall will allow most transactions originating from the internal network, with a few exceptions.
First and arguably most important, we're going to configure the firewall to know the IP addresses of our ISP's DNS servers and allow only outbound DNS queries to them. This will protect us against “DNS redirect” attacks (though not highly localized attacks that redirect DNS to some other internal system, such as one where a WLAN-connected attacker's evil DNS server is sitting next to the attacker in a van outside your house).
Second, we'll enforce the use of a local Web proxy, such as the one I walked through building in my four-part series “Building a Secure Squid Web Proxy” in the April, May, July and August 2009 issues of Linux Journal (see Resources). In other words, our firewall policy will allow Web transactions to the outside world only if they originate from the IP address of our Web proxy. This will allow us to enforce blacklists against prohibited or known dangerous sites, and also to block the activity of any non-proxy-aware malware that may end up infiltrating our internal network.
Finally, we'll restrict outbound SMTP e-mail traffic to our ISP's SMTP servers, blocking any SMTP destined elsewhere. This also will provide a small hedge against malware activity.
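The three outbound restrictions above can be sketched as iptables rules. Treat this strictly as an illustration: the proxy address 10.0.0.5 and the 203.0.113.x server addresses are placeholders for your own Squid proxy and your ISP's DNS and SMTP servers, and on a bridging firewall, bridged traffic traverses the FORWARD chain only when the kernel's bridge-netfilter support is enabled:

```shell
# Ranum's dictum: that which has not been expressly permitted is denied.
iptables -P FORWARD DROP

# Let replies to permitted connections back through.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# 1) DNS: outbound queries only to the ISP's resolvers (placeholders).
iptables -A FORWARD -p udp --dport 53 -d 203.0.113.1 -j ACCEPT
iptables -A FORWARD -p udp --dport 53 -d 203.0.113.2 -j ACCEPT

# 2) Web: outbound HTTP/HTTPS only from the internal Squid proxy
#    (10.0.0.5 is a placeholder for the proxy's address).
iptables -A FORWARD -p tcp --dport 80  -s 10.0.0.5 -j ACCEPT
iptables -A FORWARD -p tcp --dport 443 -s 10.0.0.5 -j ACCEPT

# 3) SMTP: outbound mail only to the ISP's relay (placeholder address).
iptables -A FORWARD -p tcp --dport 25 -d 203.0.113.10 -j ACCEPT
```

With the default FORWARD policy set to DROP, anything not matched above, including IRC on arbitrary ports and malware "phoning home", is simply discarded.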
Why not, you may wonder, allow all internally originated traffic through for simplicity's sake? That is a valid option and a fairly popular one at that. But, it contradicts Ranum's dictum: that which has not been expressly permitted is denied. Put another way, assume that the unexpected is also undesirable.
There's some simple math behind this dictum. Bad traffic can take an infinite range of different forms. “Known-good” traffic, for most organizations, tends to constitute a manageably short list. If you allow only the transactions you expect, and if you've done your homework on identifying and predicting everything you should expect, then other transactions are unnecessary, evil or both.
And, what on the inside, which is supposedly “trusted”, could cause unexpected transactions? Statistically speaking, probably malware—worms, trojans and viruses. Worms propagate themselves across networks, so by definition, they create lots of traffic. Trojans and viruses don't propagate themselves, but after they make landfall on a victim system (typically from an e-mail attachment, hostile Web site or by being hidden in some other application the user's been tricked into installing), they typically “phone home” in order to allow the malware's author to control the infected system from afar.
Traditionally, botnet agents used for spam propagation and Distributed Denial of Service (DDoS) attacks use the IRC protocol for command and control functions. That alone is a good reason to block all outbound IRC, but because IRC can use practically any TCP or UDP port, it isn't good enough to block TCP/UDP ports 194, 529 and 994 (its “assigned” ports). Besides, the malware could just as easily use some non-IRC protocol, again over completely arbitrary ports.
What if malware authors are clever enough to anticipate possible firewall restrictions, such that their code checks infected systems' local SMTP and Web-proxy settings? You still could block that malware if it tries to initiate Web transactions with some “known-evil” site on your Web proxy's blacklist. Regardless, security is never absolute. Good security involves taking reasonable measures to maximize the amount of effort attackers have to expend in order to succeed. Sadly, attackers will always succeed with enough effort, inside information and luck. (The good news is most attackers are opportunistic and lazy!)
Our firewall, therefore, won't allow us to be lazy about keeping our internal systems fully patched, educating our users against installing software from untrusted sources or visiting potentially nasty Web sites and so forth. But it will provide an important layer in our “security onion” that will make our network a less obvious target to attackers doing mass port scans against our ISP, and it will make it harder for any weirdness that does slip through to connect back out.
The last thing I'm going to say for now about our firewall design is that we won't have to worry about Network Address Translation (NAT) or DHCP. This, in fact, is one of the benefits of a transparent firewall! Whatever was providing NAT and DHCP services before (probably the DSL or cable modem, in our home-use scenario) can continue to do so, and if we place our firewall correctly, NAT and DHCP should continue working exactly the same as before.