Netfilter 2: in the POM of Your Hands
While the kernel source is compiling, there's plenty of time to compile and install iptables. Remember to supply the KERNEL_DIR= argument (if your kernel source isn't in /usr/src/linux), and the BINDIR=, LIBDIR= and MANDIR= arguments if you don't want your new binaries and extensions installed under /usr/local/.
One small fix before we start compiling. For whatever reason, the NETLINK extension does not compile, so if you chose the NETLINK.patch (as I did), you need to make a minor adjustment. cd into the extensions directory and open the Makefile in your favorite text editor. The first line is the shebang line, the second line is blank, and the third line starts with PF_EXT_SLIB: and lists the various extensions to be built and installed. Add NETLINK to the end of that line and save the file.
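If you'd rather script the edit than open an editor, sed can do it. The stand-in Makefile below (with made-up extension names) exists only so the substitution can be demonstrated safely; in the real tree you'd run the sed line against extensions/Makefile instead.

```shell
# Create a stand-in extensions/Makefile (contents are illustrative):
cat > /tmp/Makefile.demo <<'EOF'
#!/usr/bin/make

PF_EXT_SLIB:=ah conntrack limit state
EOF

# Append NETLINK to the end of the PF_EXT_SLIB line:
sed -i 's/^\(PF_EXT_SLIB.*\)/\1 NETLINK/' /tmp/Makefile.demo

grep '^PF_EXT_SLIB' /tmp/Makefile.demo
# -> PF_EXT_SLIB:=ah conntrack limit state NETLINK
```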
Now cd back up to the root of the iptables source tree and run your make and make install, adding the arguments noted above if required.
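As a sketch, assuming a kernel tree unpacked at /usr/src/linux-2.4.18 and a preference for installing over the distribution's own paths (both are assumptions; substitute your own), the build and install steps look like this:

```shell
# Build against the patched kernel tree, then install; the paths
# shown here are examples, not requirements.
make KERNEL_DIR=/usr/src/linux-2.4.18 BINDIR=/sbin LIBDIR=/lib MANDIR=/usr/share/man
make install KERNEL_DIR=/usr/src/linux-2.4.18 BINDIR=/sbin LIBDIR=/lib MANDIR=/usr/share/man
```

Omit the BINDIR=, LIBDIR= and MANDIR= arguments to accept the /usr/local/ defaults.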
Above, we used a modified patch process to patch the kernel. If you, like me, grab kernel patches whenever they come out, you'll find that some will no longer apply cleanly because the kernel sources have been modified. So when I do want to try out a new kernel, I save the old .config file, wipe out the old kernel sources and start fresh. You can do that or remember to save a tarball of your kernel source tree before modifying it.
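Saving the .config and a tarball takes only a moment. The sketch below demonstrates the idea on a stand-in tree under /tmp so it can run anywhere; against a real tree you'd copy from /usr/src/linux and pick a safer destination than /tmp.

```shell
# Create a stand-in "kernel tree" purely for demonstration:
mkdir -p /tmp/linux-demo
echo 'CONFIG_NETFILTER=y' > /tmp/linux-demo/.config

# Save the old .config, then tar up the pristine tree before patching:
cp /tmp/linux-demo/.config /tmp/config.saved
tar -C /tmp -czf /tmp/linux-pristine.tar.gz linux-demo
```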
If you previously built a modular kernel using patch-o-matic (or pending-patches or most-of-pom) and are only adding a few more modules, run make oldconfig to add the new modules, then make modules; make modules_install, and you can start using those modules right away.
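For example, from the top of the kernel source tree (the path is an assumption):

```shell
cd /usr/src/linux
make oldconfig          # prompts only for the newly added options
make modules            # rebuilds just the modules
make modules_install    # installs under /lib/modules/<kernel-version>/
```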
If you want to review the information you saw while applying the patch-o-matic patches, it's available in the iptables-x.x.x/patch-o-matic/ directory, in the *.patch.help files. In most cases, the examples in these files are duplicated in the kernel configuration help.
Now that we have the modules we want compiled and installed, we're ready to put them to work. But before we start, we need to decide exactly what we're going to do, and that means laying some groundwork. This groundwork isn't so important when all we have is a home system where we want to let everyone inside get out while keeping everyone outside out. Our state table alone practically assures us that's what we'll have; add masquerading or SNAT and we're done. This is what we had using the basic scripts from the “Taming the Wild Netfilter” article (September 2001 LJ).
But firewalls in use at companies are rarely so simple. They demand that we first understand (and maybe even restructure) our network topology. We also need to understand exactly what we want our firewall to accomplish. Sometimes this amounts to little more than the home case, but often it is radically different. The company's network security policy can assist us (we do have an NSP, don't we?), along with some knowledge of what we want from our network access. We won't discuss risk assessments here [see Mick Bauer's “Practical Threat Analysis and Risk Management” in the January 2002 issue of LJ], but their findings should be kept in mind to guide the overall scheme.
Many years ago we talked about our internet-connected hosts. They were all directly connected to the Internet. No big deal, as all the system administrators knew each other and things were friendly. Then everyone else discovered the Internet, and we had to make some changes. As things mushroomed out of control, we forgot or never knew who our neighbor system administrators were. We found our systems under attack. So we left our public systems directly connected but started hiding our users' hosts behind packet filters to help protect them. The systems between our router and packet filter were said to be on our DMZ, or demilitarized zone. The rest were on our trusted network behind the packet filter.
Today, few companies would configure their systems this way; usually only honeypots are deliberately left defenseless. The two most common configurations use either a single firewall with two internal NICs (one for the trusted network and one for the internal, public-access network) or two separate firewalls (the first allowing public traffic into a controlled, but not trusted, network and the second permitting entry into our inner sanctum, the trusted network).
While small companies may mix the trusted and controlled networks on one private internal network, it is best to keep these separated whenever possible. You also should control who has access to which area. Firewalls do a lot to keep bad guys out but little to protect against bad guys already inside. In fact, you may find it prudent to put firewalls up between accounting, marketing, engineering and production; none has much business in the others' files.
Because this article is principally about iptables, I won't cover network topology further, but we needed the above to see how the configurations below work. From the point of view of the external firewall, they really aren't much different; only the internal one(s), if needed, will look a bit more like the basic firewall I presented in the first article. That is, the internal firewalls won't accept new traffic except from the trusted side. What goes out also can be moderated to an extent, and we'll look at that a little as well.
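As a rough sketch of that last point (interface names are assumptions, and this is not the article's full script), an internal firewall that forwards new connections only from the trusted side might contain rules like these:

```shell
# eth1 = trusted (inside) interface, eth0 = controlled/DMZ side.
iptables -P FORWARD DROP

# Let replies and related traffic through in both directions:
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Accept NEW connections only when they arrive on the trusted side:
iptables -A FORWARD -i eth1 -m state --state NEW -j ACCEPT
```

Anything arriving on eth0 that isn't part of an established conversation simply falls through to the DROP policy.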