Automating Firewall Log Scanning
Firewalls are computers dedicated to filtering particular kinds of network traffic between two networks. They are usually employed to protect a LAN from the rest of the Internet. Securing every box on the LAN is much more costly and time-consuming than deploying, administering and monitoring a single firewall. A firewall is particularly essential to those institutions permanently connected to the Internet. Depending on the network configuration, the router can be set up as a packet filter; usually, though, it is more convenient to set up a dedicated box to act as a firewall. Because they are inexpensive and can be made extremely secure, Linux boxes make very effective firewalls.
On the 2.2.x Linux kernels a firewall is built with ipchains, while the newer 2.4.x kernels use iptables. How to set up the actual firewall is beyond the scope of this article; we refer the reader to the ipchains HOWTO for the 2.2.x kernels and to Paul “Rusty” Russell's Packet-Filtering HOWTO for the 2.4.x kernels. Both of them can be found on the Internet by using any search engine. But building the actual firewall is not enough; in order to offer tight security, a firewall needs to be monitored. In this article we explain how to build and use a web-based ipchains monitoring system called inside-control.
There are two main uses for a firewall monitoring system: to check that no malicious cracker is trying to wreak havoc in the internal LAN and to check that users inside the LAN are not abusing the Internet service.
Here is the setup for a very simple firewall, which we will use as a working example later in the article.
Suppose, for example, that the internal network is 10.0.1.0/255.255.255.0, the Linux gateway/firewall has the addresses 10.0.1.1 on the interface connected to the internal LAN and 10.200.200.1 on the interface connected to the Internet (both IP addresses are in fact nonroutable, so this is just a fictitious example). The first step to setting up a firewall is to enable gatewaying between the network interfaces:
echo 1 > /proc/sys/net/ipv4/ip_forward
We then proceed to build up a logging firewall using ipchains. First we flush all preceding rules, and we allow packets on the loopback interface and all ICMP packets:
ipchains -F
ipchains -A input -i lo -j ACCEPT
ipchains -A input -p ICMP -j ACCEPT
Now we block and log the Telnet protocol from the Internet to the internal LAN:
ipchains -A input -p TCP -s 0.0.0.0/0 -d 10.0.1.0/24 23 -l -j DENY
But we allow and log the HTTP protocol from the internal LAN to the Internet:
ipchains -A input -p TCP -s 10.0.1.0/24 -d 0.0.0.0/0 80 -l -j ACCEPT
Finally, we set up permissive policies:
ipchains -P input ACCEPT
This firewall blocks and logs all incoming Telnet connections, allows and logs all outgoing HTTP connections, and lets everything else through (see Figure 1). Such a setup is too permissive for serious protection, but it illustrates well what the automated log-scanning script can do.
The file the firewall writes its logs to is usually either /var/log/syslog or /var/log/messages. To find out which one, run
grep -q "Packet log" /var/log/syslog && echo yes
If it outputs “yes”, the log file is /var/log/syslog; if it outputs nothing, it is most probably /var/log/messages. You can confirm with
grep -q "Packet log" /var/log/messages && echo yes
If both commands produce no output, then either the firewall is inactive or no logged traffic (in our example, Telnet and HTTP) has passed through the firewall.
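The two checks above can be folded into one small script. The following is a minimal sketch; the LOGDIR variable is our own addition (not anything ipchains itself uses), introduced so the search directory can be pointed elsewhere for testing:

```shell
# Report which system log file, if any, receives the ipchains packet logs.
# LOGDIR defaults to /var/log; override it to test against another directory.
LOGDIR=${LOGDIR:-/var/log}
found=

for f in "$LOGDIR/syslog" "$LOGDIR/messages"; do
    # a file qualifies if it exists and contains at least one "Packet log" line
    if [ -f "$f" ] && grep -q "Packet log" "$f" 2>/dev/null; then
        found=$f
        break
    fi
done

if [ -n "$found" ]; then
    echo "firewall logs found in $found"
else
    echo "no firewall logs: firewall inactive or no logged traffic"
fi
```

Because /var/log/syslog is checked first, a system that logs to both files reports syslog, matching the order of the manual checks above.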
Regarding the 2.4.x kernels and iptables, things are a bit more complicated. First, you must remember to compile the kernel with all of the packet-filtering options, including the LOG target. Second, change ipchains to iptables. Then change the names of the chains to uppercase (e.g., input becomes INPUT). Next, change the names of the targets (DENY becomes DROP). Lastly, port numbers are specified in a different way. Listing 1 gives the 2.4.x sequence of commands equivalent to the 2.2.x sequence above.
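Once the right log file is known, the kind of scanning that inside-control automates can be sketched with standard text tools. The log-line layout assumed below (the "Packet log:" prefix followed by chain, verdict, interface, a PROTO= field, then source and destination as addr:port) follows the 2.2.x ipchains kernel log format, but exact field positions depend on your syslog configuration, so treat this as a sketch rather than a finished parser:

```shell
# Count denied connection attempts per source address.
# Assumed ipchains (2.2.x) log layout; field positions may differ:
#   ... Packet log: input DENY eth0 PROTO=6 1.2.3.4:1034 10.0.1.2:23 L=60 ...
LOGFILE=${LOGFILE:-/var/log/syslog}   # adjust after the check above

grep "Packet log: input DENY" "$LOGFILE" 2>/dev/null \
  | awk '{
        # the field right after PROTO=... is the source as addr:port
        for (i = 1; i <= NF; i++)
            if ($i ~ /^PROTO=/) {
                split($(i + 1), src, ":")
                print src[1]
            }
    }' \
  | sort | uniq -c | sort -rn
```

The output is a ranked list of offending source addresses, with the busiest attacker first; the same pipeline with ACCEPT instead of DENY ranks the internal hosts generating outgoing HTTP traffic.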