Designing a Safe Network Using Firewalls
A firewall can be your best friend; it can also be the cause of a lot of unforeseen problems. When should you consider placing a firewall into your network? And if you are sure you need one, where in the network do you put it? Too often, firewall policy results from a non-technical board meeting, on the basis of the chairman's “I think we want a firewall to secure our network” remark. An organization needs to think about its reasons for installing a firewall—what is it meant to protect against, and how should it go about doing its job? This article aims to clarify some of the issues that require consideration.
Although the question of what a firewall actually is seems easy to answer, it is not. Security experts say a firewall is a dedicated machine that checks every network packet passing through and either drops or rejects certain packets based on rules set by the system administrator. Today, however, we also encounter firewalls running web servers (Firewall-1 and various NT and Unix machines) and web servers running a firewall application. It is now common practice to call anything that filters network packets a firewall. Thus, the word firewall usually refers to the function of filtering packets, or to the software that carries out that function, and less often to the hardware that runs the application.
It is by no means necessary to purchase specialized firewall hardware or even software. A Linux server—running on a $400 386 PC—provides as much protection as most commercial firewalls, with much greater flexibility and easier configuration.
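By way of illustration, a minimal default-deny packet filter on such a Linux box might look like the following. This is a sketch only: a Linux machine of that vintage would have used ipfwadm or ipchains, so the modern iptables syntax is assumed here, and the interface name and internal address range are invented for the example.

```shell
# Sketch of a default-deny forwarding policy on a Linux firewall.
# Assumptions (not from the article): eth0 is the external interface
# and 192.168.1.0/24 is the internal LAN.
iptables -P FORWARD DROP                                   # deny by default
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.168.1.0/24 -o eth0 -j ACCEPT    # let the LAN out
```

Deny-by-default is the important design choice: a forgotten rule then fails closed rather than leaving a hole open.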
A few years ago I attended a Dutch Unix Users Group (NLUUG) conference. One of the topics was “Using Firewalls to Secure Your Network”. After listening to a few speakers, I had not found a single argument that justified the necessity of a firewall. I still believe this is basically true. A good network doesn't need a firewall to filter out nonsense; all machines should be able to cope with bad data. However, theory and practice are two different things.
Unless you have incredibly tight control over your network, your users are likely to install a wide variety of software on their workstations, and to use that software in ways you probably haven't anticipated. In addition, new security holes are discovered all the time in common operating systems, and it's very difficult to make sure each machine has the latest version with all the bugs fixed.
For both of these reasons, a centrally-controlled firewall is a valuable line of defense. Yes, you should control the software your users install. Yes, you should make sure the security controls on their workstations are as up-to-date as possible. But since you can't rely on this being true all the time, firewalls must always be a consideration and nearly always a reality.
A few months ago, a small crisis arose in the Internet security world: the infamous “Ping of Death”. Somewhere in the BSD socket code, a check was missing on the size of certain fragmented network packets. As a result, after reassembling a fragmented packet, the packet could end up a few bytes larger than the maximum allowed packet size. Since the code assumed this could never happen, its internal buffers were sized to that maximum, and the excess bytes caused a very nasty buffer overflow that overwrote memory, usually crashing the machine. This bug affected a large community because it was present in the BSD socket code, which has often been used as a base for new software (and firmware), so a wide variety of systems were susceptible. A list of all operating systems vulnerable to the “Ping of Death” can be found on Mike Bremford's page, located at http://prospect.epresence.com/ping/. Many devices besides general-purpose computers were susceptible to this problem as well: Ethernet switches, routers, modems, printers, hubs and firewalls. The busier the machine, the more likely the buffer overrun was to be fatal.
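To see why reassembly could overflow at all, consider the arithmetic. The IP fragment offset is a 13-bit field counted in 8-byte units, so a final fragment may legally begin just below the 64KB boundary and still carry data past it. The numbers below are a worked example, not figures from the article:

```shell
# Fragment-offset arithmetic behind the reassembly overflow.
max_offset=$((8191 * 8))   # 65528: the largest encodable fragment offset
frag_data=100              # payload of the final fragment (example value)
echo $((max_offset + frag_data))   # 65628: past the 65535-byte maximum
```

Any reassembly buffer sized to 65535 bytes, as the BSD code's was, is overrun by such a packet.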
A second reason this bug was so incredibly dangerous was that it was trivial to exploit. The Microsoft Windows operating systems contain an implementation of the ICMP ping program that miscalculates the size of a packet. The maximum data size you can tell it to use is 65527 bytes, which together with the 8-byte ICMP header is indeed the most that fits in a maximum-sized IP packet. But this implementation created a data segment of 65527 bytes and then put an IP header on top of it, so you end up with a packet larger than 65535 bytes. All you had to do was type:
ping -l 65527 victim.com
Once this method was known, servers were crashing around the world as people happily pinged the globe.
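The arithmetic behind that one-liner is worth spelling out; the header sizes below are the standard ones, not figures from the article:

```shell
# Why "ping -l 65527" produces an illegal packet.
data=65527        # data size requested with -l
icmp_header=8     # ICMP echo-request header
ip_header=20      # minimal IP header
echo $((data + icmp_header + ip_header))   # 65555: over the 65535 limit
```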
As you can see from the list on Mike's page, patches are not available for all the known vulnerable devices. And even if they were, the system administration staff would need at least a few days to fix all the equipment. This is a situation in which a firewall has a very valid role: when a security problem of this magnitude is found, you can block the offending traffic at the access point of your network. If you had a firewall at the time, most likely you filtered out all ICMP packets until you had confirmed that your database servers were not vulnerable. Even though not a single one of these machines should have been vulnerable, the truth is that a lot of them were.
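As a sketch of that kind of emergency response, a blanket ICMP block might look like this in modern iptables syntax (the article predates iptables, and dropping all of ICMP is a deliberate overreaction to be narrowed once the machines behind the firewall are patched):

```shell
# Emergency stopgap at the access point: drop all ICMP until every
# internal machine is confirmed patched. Sketch only; this also blocks
# useful ICMP messages such as "fragmentation needed".
iptables -A INPUT   -p icmp -j DROP
iptables -A FORWARD -p icmp -j DROP
```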
The conclusion we draw from this experience is that the speed and power of response a firewall gives us make it an invaluable tool.