Linux's Tell-Tale Heart, Part 3

Trimming Monster Logs and Advanced Cracker Detection

Welcome back, everyone, to the SysAdmin's Corner. It is time once again to delve deep into the soul of your Linux system, to grasp its subtle mysteries, and to maybe, just maybe, catch a cracker before he does damage.

Log files can get pretty large. An active server is a talkative one, and talk from your Linux system means log entries. Lots of log entries. Over the years, I've seen log files grow unchecked until the system crashes for lack of disk space. Sure, now that we all have 40GB drives on our PCs, it's not as bad, but a mess is a mess and needs cleaning from time to time. I've made jokes about the old days where, without the logrotate command, I had to trim my own log files and walk 14 miles to school (uphill, both directions). Well, the logfile trimming part is true, and somewhere along the way, it occurred to me that not everybody has logrotate on their system.
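If you find yourself in that boat, the by-hand version of the job boils down to copying the log aside and then truncating it in place. Here's a rough sketch, run against a scratch file in /tmp so it's safe to experiment with (substitute a real log like /var/log/messages once you trust it):

```shell
#!/bin/sh
# Manual log "rotation": archive a copy, then empty the live file.
# /tmp/demo.log stands in for a real log file.
LOG=/tmp/demo.log
echo "old entry" > "$LOG"   # pretend this is an active log

cp "$LOG" "$LOG.1"          # keep one archived copy
: > "$LOG"                  # truncate the live log without removing it

ls -l "$LOG" "$LOG.1"       # live log is now 0 bytes; .1 holds the history
```

Note that truncating with `: >` rather than deleting and recreating the file keeps the same inode, so a daemon that already has the log open keeps writing to the right place.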

In case you don't know, logrotate is a nifty little utility written by Erik Troan that takes care of all this ugly business of archiving logs and recreating them. If you are running Red Hat, you almost certainly have logrotate running. In fact, you should see an entry for it in your /etc/cron.daily directory. This is a simple script that calls logrotate with the default configuration, at /etc/logrotate.conf. Another giveaway is the presence of files in your /var/log directory with .1, .2, .3 and .4 extensions. Before I get into the gory details of log rotation, I should probably tell you that the times for execution of your cron.daily, cron.weekly and cron.some_time files can be found in /etc/crontab. This is just a text file, and you can view it with cat /etc/crontab.
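On a stock Red Hat system, for instance, the run-parts entries in /etc/crontab look something like this (the times shown are the Red Hat defaults; yours may differ):

```
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
```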

Now, back to logrotate. If you do not have the program on your system, the source for logrotate is available from

Then, extract and build logrotate:

     tar -xzvf logrotate-3.3.tar.gz
     cd logrotate-3.3
     make
     make install

The next step is to edit /etc/logrotate.conf which defines what logrotate does and how. Configuration parameters exist in both a global configuration file and one for each subsystem. The global file is (by default) /etc/logrotate.conf while the subsystem specific definitions are in the directory /etc/logrotate.d. For instance, in my default global file, I have the following parameters:

     rotate 4
     errors marcel
     create
     include /etc/logrotate.d

The "rotate" parameter tells logrotate to keep four archived copies of each log; on the fifth rotation, the oldest ".4" file gets dumped. The "create" keyword tells logrotate to create a new, empty log file after archiving the old one. "errors marcel", meanwhile, tells logrotate to e-mail any errors to me; normally, this is set to root. The "include" line is a Red Hat convention that gives each package its own log-handling definition; the directory listed is where those per-package files are kept. You can, if you wish, write your own definitions into your /etc/logrotate.conf file. The format is like this:

     "/var/log/some_log_file" {
          rotate 5
          weekly
          mail marcel
          postrotate
               /sbin/killall -HUP syslogd
          endscript
     }

Each definition, or paragraph, starts with the full path name of the log file, followed by a number of options inside squiggly brackets. Let's take apart the above example. The "rotate 5" overrides the default of 4 that I set up in my global config. The "weekly" tells logrotate to rotate the file every seven days rather than daily (remember, I have logrotate running from /etc/cron.daily). The "mail" line sends me the file that has just been rotated out, after which the syslogd daemon gets restarted. This restart is a kind of "script within a script": the "postrotate" parameter starts it, followed by one or more commands to execute after rotation, and "endscript" ends this mini-script. A closing squiggly bracket finishes off the whole definition.

Even if you have logrotate already installed and set up on your system, things change, and you may decide it makes more sense to change the times at which the process runs and how often. Very recently, I had reason to change it myself. On my Red Hat system, logrotate did a nice job of taking care of my Apache server files, until ... I downloaded a new Apache, recompiled it with mod_ssl, mod_php and a few other things. Red Hat stores the Apache log files in /var/log/httpd while a new Apache install stores its logs in /usr/local/apache/logs. I realize I could have changed all this when I first built Apache, but I did not, and decided to leave it as Apache seemed to like it. The trouble was that my logs were growing and growing with nothing but disk space standing in their way.
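The fix was simply to teach logrotate about the new location. A sketch of what such an entry might look like (the paths and the restart command reflect my own Apache build; adapt them to yours):

```
"/usr/local/apache/logs/access_log" {
     rotate 5
     weekly
     postrotate
          /usr/local/apache/bin/apachectl graceful
     endscript
}
```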

On to other things! Last time, I told you about something called logcheck, a little tool for simplifying the process of wandering through your log files looking for things that make you say "Hmmm ..." as you scratch your chin intelligently. One thing you are looking for, of course, is a sign that the dreaded system cracker may be trying to infiltrate your system. The problem with crackers is that they don't just sit there and telnet to your system and try different passwords -- not if they want to keep attention away from themselves as they try to break your security. The most likely first attack is a stealth port scan done with a tool like nmap, which we discussed on this very Corner some time ago.

"Stealth" means exactly that: normally, these scans leave no trace in your logs. The best defense against a potential cracker is to catch them and report them to their ISP while they are still busy scanning your network for weaknesses. So, you ask, if the scans are stealthed, how are you going to find them? Good question. A good answer comes from those crazy folks at Psionic (the same people who brought you logcheck): a tool called PortSentry. In conjunction with logcheck, PortSentry is an ideal way to help you identify potential threats to your system. You can pick up the latest version of PortSentry at

The latest version is portsentry-1.0.tar.gz. To install it, extract the files into a temporary directory and build the software.

     tar -xzvf portsentry-1.0.tar.gz
     cd portsentry-1.0
     make linux
     make install

Before you actually type "make", you might want to read the README.install file, which will give you far more detail than I can give you in a short article. You may also want to modify the path to certain files by editing portsentry_config.h before you compile. On my system, I simply took all the defaults and went ahead with the compile.

After you have built and installed the program, you will probably want to edit the portsentry.ignore file. You'll find it in the directory /usr/local/psionic/portsentry/. This file contains a list of IP addresses you do not want blocked. By default, you will have 127.0.0.1 and 0.0.0.0 listed. It's a good idea to put your own local host address in here as well.
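A minimal portsentry.ignore might look like this, assuming (purely as an example) that the machine's own address is 192.168.1.10:

```
# Addresses that PortSentry should never block
127.0.0.1
0.0.0.0
192.168.1.10
```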

Next, you should edit the portsentry.conf file, located in the same directory as portsentry.ignore. Look for the section that talks about the KILL_ROUTE command. Depending on your OS or your kernel revision, you may be using either route, ipfwadm or ipchains to dynamically block offending traffic. Here's the section I uncommented for my Red Hat 6.1 system. Note that you can uncomment only one KILL_ROUTE command.

    # New ipchain support for Linux kernel version 2.102+
    KILL_ROUTE="/sbin/ipchains -I input -s $TARGET$ -j DENY -l"
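If your kernel or distribution doesn't give you ipchains, the same section of portsentry.conf offers plain route-based alternatives (commented out). One variant, for systems whose route command supports the reject flag, looks roughly like this:

```
# Drop the attacker with a reject route instead of a firewall rule.
# Remember: only one KILL_ROUTE line may be uncommented at a time.
KILL_ROUTE="/sbin/route add -host $TARGET$ reject"
```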

The portsentry.conf file has a number of other options, and I recommend you at least take a look at it. For instance, you can configure PortSentry to run specific commands when a scan is detected. The text suggests paging you. Psionic also includes some warnings about how your should react on detecting a scan. Rightfully so. Here's another Marcel soapbox speech, shortened so as not to try your patience.

[Marcel steps onto one of his soapboxes.] You have to use your judgement. Just because some people are out to get you doesn't mean everybody is. A single port scan is not necessarily a break-in attempt. For instance, I have on occasion started my web server without SSL. Someone attempting to connect with "https://" instead of "http://" will set off alarm bells. I don't want to report this person for my forgetfulness. [Marcel now steps off his soapbox.] Of course, if one address scans all your ports, they are likely up to no good.

Finally, you want to start PortSentry. I use the most sensitive mode, and I always start it at boot time by putting it in my /etc/rc.d/rc.local script. Here's how you run it in advanced TCP mode:

     /usr/local/psionic/portsentry/portsentry -atcp
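In my rc.local, that amounts to lines like the following (assuming the default install path). If you want UDP coverage too, portsentry must be started a second time, since each protocol gets its own instance:

```
# Start PortSentry at boot: advanced TCP mode, then advanced UDP mode
/usr/local/psionic/portsentry/portsentry -atcp
/usr/local/psionic/portsentry/portsentry -audp
```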

When portsentry detects an attempt at break-in, or a port scan, it can automatically lock out that address by putting an entry in your /etc/hosts.deny file, thereby denying all TCP-wrapped services. As I mentioned above, it will also issue route commands to block the offender, or add rules to your firewall configuration with "ipfwadm" or "ipchains". The following is a sample of an actual scan of my system, as reported by portsentry and logcheck. The actual report was several hundred lines long, since the would-be cracker scanned EVERY port on my system, whether real or imaginary. I know you'll thank me for keeping it as short as possible. I've also mangled the IP address to hide the origin of the scan.

   Active System Attack Alerts
   Jul  7 21:46:04 website portsentry[462]: attackalert:
   SYN/Normal scan from host: to TCP port: 449
   Jul  7 21:46:04 website portsentry[462]: attackalert: Host has been blocked via wrappers with string: "ALL:"
   Jul 13 17:44:53 website portsentry[31279]: attackalert: 
   Host has been blocked via dropped route using command: 
   "/sbin/ipchains -I input -s -j DENY -l"

There's a lot more, with hundreds of ports being listed, but you get the idea. The logcheck program then e-mails this information to me off-site, where I can study it and take appropriate action. This is by no means a sure-fire cure-all against break-ins. Always keep an eye on your system and make regular backups. Watching for possible intruders is no guarantee that no one will ever break through your security; as I've said before, you have to sleep sometime. Nothing is perfect, but monitoring your logs regularly, and using tools like logcheck and PortSentry to highlight trouble spots, will do wonders for prevention. Until next time, remember that your Linux system is talking to you. Are you listening?

