Designing a Safe Network Using Firewalls
A firewall can be your best friend; it can also be the cause of a lot of unforeseen problems. When should you consider placing a firewall into your network? And if you are sure you need one, where in the network do you put it? Too often, firewall policy results from a non-technical board meeting, on the basis of the chairman's “I think we want a firewall to secure our network” remark. An organization needs to think about its reasons for installing a firewall—what is it meant to protect against, and how should it go about doing its job? This article aims to clarify some of the issues that require consideration.
What is a firewall? Although this question seems easy to answer, it is not. Security experts say a firewall is a dedicated machine that inspects every network packet passing through it, and that either drops or rejects certain packets based on rules set by the system administrator. However, today we also encounter firewalls running web servers (Firewall-1 and various NT and Unix machines) and web servers running a firewall application. It is now common practice to call anything that filters network packets a firewall. Thus, the word firewall usually refers to the function of filtering packets, or to the software that carries out that function—and less often to the hardware that runs the application.
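The core function described above—matching each packet against an ordered rule list and accepting, dropping, or rejecting it—can be sketched in a few lines. This is a toy illustration, not how any real firewall is implemented; the rule table, field names, and actions are invented for the example.

```python
# Toy sketch of a firewall's filtering function: first matching rule wins.
# Real firewalls also match on interfaces, TCP flags, connection state, etc.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # source IP address (unused by these toy rules)
    dst: str        # destination IP address (unused by these toy rules)
    proto: str      # "tcp", "udp" or "icmp"
    dport: int      # destination port (0 for ICMP)

# Each rule: (protocol or None, destination port or None, action).
# "drop" discards silently; "reject" discards and notifies the sender.
RULES = [
    ("tcp",  80,   "accept"),   # allow web traffic
    ("tcp",  23,   "reject"),   # refuse telnet, tell the sender
    ("icmp", 0,    "drop"),     # silently discard ICMP
    (None,   None, "drop"),     # default policy: drop everything else
]

def filter_packet(pkt: Packet) -> str:
    """Return the action prescribed by the first matching rule."""
    for proto, dport, action in RULES:
        if proto is not None and proto != pkt.proto:
            continue
        if dport is not None and dport != pkt.dport:
            continue
        return action
    return "drop"   # unreachable with a catch-all rule; kept for safety

print(filter_packet(Packet("10.0.0.1", "10.0.0.2", "tcp", 80)))   # accept
print(filter_packet(Packet("10.0.0.1", "10.0.0.2", "icmp", 0)))   # drop
```

The first-match-wins ordering matters: putting the catch-all "drop" rule first would shadow every other rule.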
It is by no means necessary to purchase specialized firewall hardware or even software. A Linux server—running on a $400 386 PC—provides as much protection as most commercial firewalls, with much greater flexibility and easier configuration.
A few years ago I attended a Dutch Unix Users Group (NLUUG) conference. One of the topics was “Using Firewalls to Secure Your Network”. After listening to a few speakers, I had not found a single argument that justified the necessity of a firewall. I still believe this is basically true. A good network doesn't need a firewall to filter out nonsense; all machines should be able to cope with bad data. However, theory and practice are two different things.
Unless you have incredibly tight control over your network, your users are likely to install a wide variety of software on their workstations, and to use that software in ways you probably haven't anticipated. In addition, new security holes are discovered all the time in common operating systems, and it's very difficult to make sure each machine has the latest version with all the bugs fixed.
For both of these reasons, a centrally-controlled firewall is a valuable line of defense. Yes, you should control the software your users install. Yes, you should make sure the security controls on their workstations are as up-to-date as possible. But since you can't rely on this being true all the time, firewalls must always be a consideration and nearly always a reality.
A few months ago, a small crisis arose in the Internet security world—the infamous “Ping of Death”. Somewhere in the BSD socket code, a check was missing on the size of certain fragmented network packets: after reassembly, a packet could end up a few bytes larger than the maximum allowed packet size. Since the code assumed this could never happen, its internal buffers were sized to that maximum, and the result was a very nasty buffer overrun that corrupted memory and usually crashed the machine. The bug affected a large community because it was present in the BSD socket code, which has often been used as a base for new software (and firmware), so a wide variety of systems were susceptible to it. A list of all operating systems vulnerable to the “Ping of Death” can be found on Mike Bremford's page, located at http://prospect.epresence.com/ping/. Many devices besides general-purpose computers were susceptible as well—Ethernet switches, routers, modems, printers, hubs and even firewalls. The busier the machine, the more likely the overrun was to be fatal.
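The missing check can be illustrated with a small sketch of fragment reassembly. The function and names below are invented for the illustration and are not the actual BSD kernel code; the point is the one bounds check that the vulnerable implementations lacked.

```python
# Sketch of the reassembly flaw behind the "Ping of Death". The vulnerable
# code wrote fragments into a buffer sized for the legal maximum, trusting
# that offset + length could never exceed it; a final fragment with a large
# offset overran the buffer. The guard below is the missing check.

MAX_IP_PACKET = 65535   # maximum IP datagram size, headers included

class OversizedDatagram(Exception):
    pass

def reassemble(fragments):
    """Reassemble (offset, payload) fragments into one datagram."""
    buf = bytearray(MAX_IP_PACKET)
    end = 0
    for offset, payload in fragments:
        if offset + len(payload) > MAX_IP_PACKET:    # the missing check
            raise OversizedDatagram("fragment extends past 65535 bytes")
        buf[offset:offset + len(payload)] = payload
        end = max(end, offset + len(payload))
    return bytes(buf[:end])

# A normal datagram reassembles fine:
print(len(reassemble([(0, b"x" * 1480), (1480, b"x" * 1480)])))   # 2960

# A last fragment whose offset pushes the total past 65535 is rejected:
try:
    reassemble([(0, b"x" * 1480), (65528, b"x" * 100)])
except OversizedDatagram as e:
    print("rejected:", e)
```

In the vulnerable code there was no such guard, so the write past the end of the fixed-size buffer clobbered adjacent memory.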
A second reason this bug was so incredibly dangerous was that it was trivial to exploit. The Microsoft Windows operating systems shipped with an implementation of the ICMP ping program that miscalculated packet sizes. The largest data size you can tell it to use is 65527 bytes, which, together with the 8-byte ICMP header, exactly fills the maximum allowed IP packet of 65535 bytes. But the implementation created the 65527-byte data segment and then put both the ICMP header and an IP header on it, so you end up with a packet larger than 65535 bytes. All you had to do was type:
ping -l 65527 victim.com
Once this method was known, servers were crashing around the world as people happily pinged the globe.
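The arithmetic behind that command is worth spelling out, using the sizes from the text (a minimal 20-byte IP header is assumed here):

```python
# Why "ping -l 65527" produces an illegal packet.
ICMP_DATA   = 65527   # data size requested on the command line
ICMP_HEADER = 8       # ICMP echo-request header
IP_HEADER   = 20      # minimal IP header (assumed, no options)
MAX_IP      = 65535   # largest legal IP datagram, headers included

total = ICMP_DATA + ICMP_HEADER + IP_HEADER
print(total, total > MAX_IP)   # 65555 True
```

The 20 extra bytes are exactly what the reassembly code never expected to see.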
As you can see from the list on Mike's page, patches are not available for all the known vulnerable devices. And even if they were, the system administration staff would need at least a few days to fix all the equipment. This is a situation where a firewall plays a very valid role: when a security problem of this magnitude is found, you can block the offending traffic at the access point of your network. If you had a firewall at the time, most likely you filtered out all ICMP packets until you had confirmed that your database servers were not vulnerable. Even though not a single one of those machines should have been vulnerable, the truth is that a lot of them were.
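Such an emergency filter is a one-liner at the firewall. The rules below are a sketch using the modern iptables syntax (the tool of the Ping of Death era would have been ipfwadm); adapt chain names and policy to your own setup before relying on them.

```shell
# Emergency measure: silently drop all ICMP at the network's access point
# until the machines behind the firewall are confirmed patched.
iptables -I FORWARD -p icmp -j DROP   # ICMP crossing the firewall
iptables -I INPUT   -p icmp -j DROP   # ICMP aimed at the firewall itself

# Once everything is patched, remove the emergency rules again:
iptables -D FORWARD -p icmp -j DROP
iptables -D INPUT   -p icmp -j DROP
```

Blocking all of ICMP is a blunt instrument—it also kills ordinary ping and some path-MTU discovery—but as a stopgap for a few days it buys the time the patching requires.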
The conclusion we draw from this experience is that the speed and flexibility of response a firewall gives us can be invaluable.