Netfilter 2: in the POM of Your Hands
Despite the advice you may have read (as noted in the last paragraph of the “Preparing Your System for an iptables Upgrade” section above), there are times you'll want to run iptables on a simple, nonfirewall host. The simplest and most common example would be a student system on a university network. In this case, you really should trust no other system, so you'll probably want to accept only related, established traffic.
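A minimal sketch of such a host ruleset might look like the following (the exact policy is up to you; loopback is allowed here so local services keep working):

# default-deny inbound
iptables -P INPUT DROP
# allow loopback traffic
iptables -A INPUT -i lo -j ACCEPT
# accept only replies to connections this host initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT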
Another example might be if you have decided to use an XDM server where most users work, but your Internet policy permits only certain employees to surf the Web. How to deal with this? Well, fortunately, we can handle it fairly simply with rules like the following:
iptables -t filter -I OUTPUT -p tcp --dport 80 -m owner --uid-owner 500 -j REJECT
iptables -t filter -A OUTPUT -p tcp --dport 80 -j ACCEPT   # only required if OUTPUT policy is DROP/REJECT

Or, to permit only certain users instead:

iptables -t filter -I OUTPUT -p tcp --dport 80 -m owner --uid-owner 500 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 80 -j REJECT   # only required if OUTPUT policy is ACCEPT

Note that the general rule is appended with -A so it always sits below the per-user rule; inserting both with -I would leave the general rule first and short-circuit the owner match. Naturally, you'd need a list of either those permitted access or those denied. Also, you wouldn't want to write individual rules for each user. I suggest handling the rules like this:

for i in `cat surfweb.txt`; do
    iptables -t filter -I OUTPUT -p tcp --dport 80 -m owner --uid-owner $i -j REJECT
done

Just create a list of users to REJECT (or to ACCEPT, changing the rule to match) as the file surfweb.txt, and add user IDs to it as needed. You might find the above construct valuable for other repetitive rules as well. Note, however, this only prevents them from surfing from the XDM server, not from their local systems.
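Because everything here hinges on rule order, it's worth listing the chain after loading to confirm the per-user rules sit ahead of the general one:

iptables -t filter -L OUTPUT -n --line-numbers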
So how might they be stopped from surfing from their local systems? Well, the firewall could simply drop or reject packets coming from the disallowed IP address. Easy, right? I mean, this is what packet filters are all about. But wait, we're using DHCP and don't necessarily know in advance what the IP address will be. Looks like we've outsmarted ourselves. Or have we? While we may not know the IP address, one thing we can know is the MAC address. So we get a list of MAC addresses from the systems themselves (or via arp, or from the dhcpd.leases file). Then we use rules like the following:
iptables -t filter -I FORWARD -i eth0 -m mac --mac-source <MAC> -j ACCEPT
iptables -t filter -A FORWARD -i eth0 -p tcp --dport 80 -m state --state NEW -j DROP
As before, this is best done in a loop, with the MAC addresses kept in one file and read from there.
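A minimal sketch of that loop, assuming the permitted MAC addresses sit one per line in a hypothetical file named webmacs.txt:

# one way to look up candidate MACs is the dhcpd leases file
# (its path varies by distribution):
#   grep 'hardware ethernet' /var/lib/dhcp/dhcpd.leases | awk '{print $3}' | tr -d ';' | sort -u
for i in `cat webmacs.txt`; do
    iptables -t filter -I FORWARD -i eth0 -m mac --mac-source $i -j ACCEPT
done
# anyone not listed above has new web connections dropped
iptables -t filter -A FORWARD -i eth0 -p tcp --dport 80 -m state --state NEW -j DROP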
Note: to use the MAC address to permit or deny systems, remember that they must be on your local network—that is, directly connected, via a hub or switch, to the firewall. Each routed hop rewrites the source MAC address, so a firewall can see only the MAC of the last device that forwarded the packet. If the systems in question are behind an internal firewall, and not connected on the same LAN segment as the external firewall, you must put this rule on the inner firewall.
My point here is crucial: you must know what information is actually available to the system enforcing the rule. Only the system where packets originate can know which user ID owns the process generating them. Only a system on the local LAN segment can see the MAC address of an originating system. Beyond that, we have only the information carried in the IP header.
This month we looked at Rusty's patch-o-matic and at installing an updated kernel and the user-land iptables utility. Probably the most important part is making sure that if things go wrong you can recover; meanwhile, Rusty has worked very hard on ensuring you won't need to. After that, we looked at a couple of common network configurations. You'll need to remember these when you dive into next month's Kernel Korner, which will be part two of this article. Finally, we took a quick look at how and where iptables might be used in nonfirewall situations to control network resources.
Next month we'll look at managing services behind our NAT-ed firewall: specifically, how to make the best use of the IP addresses your ISP has assigned, and how to handle services like e-mail properly in this configuration. We'll also look at more matches, targets and tables, as well as some common errors made when building rules.