Linux's Tell-Tale Heart, Part 2

A little help when listening to your system's heart ...

Hello, everyone, and welcome back to the Corner. It's great to see all your smiling faces gathered 'round for another week of getting in touch with your inner Linux system. The way we do that is by listening to what our computers are telling us. As I hinted last time, there is an awful lot of dialog going on in that Linux system of yours, and logs are just part of the picture. Since they are such an important part, however, we will dig a little deeper before moving on to other secrets.

The first thing I want to do this week is compliment the many readers of this column who rightfully looked at my "silly Perl script" of last week and said, "Uh, Marcel, you know there is a better way of doing that, don't you?" Before I tell you the better way, I will bare my soul here and now, and tell you in all honesty that I had not even considered that better way. Thanks to all who wrote for keeping my brain from turning to mush.

The better way to watch a log file for changes is with the tail command's -f flag. Let's say you were keeping an eye out for the evil system cracker and consequently wanted to watch your /var/log/secure file. Open up a terminal window, and type this command:

     tail -f /var/log/secure

The -f flag tells tail to "follow" the file: until I cancel the command with control-C, tail will keep printing any new content that appears in the file. Again, I thank my readers for keeping me on my toes.
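
Since we are on the subject (this aside is mine, not part of the readers' tip), tail -f combines nicely with grep when a busy log buries the lines you actually care about. The sketch below runs the filter over two sample lines so it is safe to try anywhere; on a live system you would feed it tail -f /var/log/secure instead.

```shell
# On a live system, the filter would follow the log itself:
#   tail -f /var/log/secure | grep --line-buffered -i "failed"
# --line-buffered makes grep flush each match right away, which matters
# when input trickles in one line at a time from tail -f.
# Here, two sample lines stand in for the live log:
printf '%s\n' \
  'Jul  5 14:31:58 natika sshd[902]: Accepted password for marcel' \
  'Jul  5 14:32:06 natika login[910]: FAILED LOGIN 1 ON tty1 FOR marcel' |
grep --line-buffered -i "failed"
```

Only the FAILED LOGIN line makes it through the pipe.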

Speaking of log files: sure, the system is busy writing this wordy diary, but there may be situations in which you want to write to the system log yourself. To do this, you use a little program called logger, which provides you (or your scripts) with a command-line interface to the syslog system. So why, you ask, is this a good thing? I'll start answering by saying that I genuinely admire Real(tm) Programmers. I, for one, am not given to spending hours hacking C code when I can get away with writing a quick and dirty script. A real C programmer would use the syslog libraries and write a classy little dæmon. I, being a lazy system administrator type, would write a script that uses a clever tool written by a C programmer, a tool such as logger. With logger, I can write to a standardized system location (/var/log/messages) and let my syslogd dæmon take care of the rest. You might also remember from the last article that syslogd might even be writing to another system's logs as well.

Imagine a script called "natika" that watches some critical system resources on one of my servers. If those resources drop below a certain level, the script writes a message to the system log file, in this case /var/log/messages. The command I would put in my script to accomplish this is below. The -i flag includes logger's process ID in the entry. (One correction worth making here: logger's -f flag does not choose the log file you write to; it tells logger to read its messages from the named file. Where an entry ends up is decided by syslogd's configuration, based on the message's facility and priority, which you can set with logger's -p flag. On most stock setups, the default of user.notice lands in /var/log/messages.)

     logger -i "Low on coffee.  This is very important."

If I "tail" my /var/log/messages file, I get this result:

    # tail -1 /var/log/messages
    Jul  5 14:32:06 natika logger[1355]: Low on coffee.  This is very important.

Since my system is listening for natika's syslog messages, I will know right away if something important is happening on the other system.
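
To make that concrete, here is a minimal sketch of what a watcher script along these lines might look like. The filesystem checked, the threshold, and the tag are all my own illustrative choices, not a fixed recipe.

```shell
#!/bin/sh
# Hypothetical sketch of a resource watcher that complains via logger.
THRESHOLD=90    # warn when the filesystem is more than 90% full

# df -P gives predictable one-line-per-filesystem output; the fifth
# field is the use percentage, with a trailing % that awk strips off.
usage=$(df -P /var | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$usage" -gt "$THRESHOLD" ]; then
    logger -i -t natika "Low on disk space: /var is at ${usage}%."
fi
```

The -t flag simply tags the entry with a name of your choosing; combined with -i, the resulting log line looks much like the example above.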

Along with your own user-generated log entries, you might have noticed it is starting to get rather busy in your system's personal diaries. So how do you know what to look for? Worse, even though running a tail -f of your messages log to your screen is great if you are connected and you happen to be watching at the time, what about the other times? Your logs also demand attention for those times you are not there, and (believe it or not) even systems administrators have to sleep.

This is where logcheck from Psionic Software comes into play. This is an automated logfile analyser that works while you sleep, and it is available free of charge and GPL'ed. Follow http://www.psionic.com/abacus/logcheck/ to get your copy, then come right back for the details.

The latest version is logcheck-1.1.1.tar.gz. To install it, extract the files into a temporary directory and build the software. You might want to read the whole INSTALL file (although I will give you the quick and dirty). The file contains some good stuff about securing your log files. Odds are that if you are running a standard Linux distribution, the permissions on your log files are probably fine (rw-------).
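
If yours turn out to be looser than that, tightening them up is a one-liner. This aside is mine, not a quote from the INSTALL file, and it is demonstrated on a scratch file so any user can try it safely.

```shell
# Demonstrated on a scratch file; on a real system you would point
# chmod at the logs themselves, e.g.
#   chmod 600 /var/log/messages /var/log/secure /var/log/maillog
f=$(mktemp)
chmod 600 "$f"      # rw for the owner, nothing for anyone else
ls -l "$f"          # first column should now read -rw-------
rm -f "$f"
```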

Logcheck consists (more or less) of two programs, logcheck.sh and logtail. The first, logcheck.sh, is a script that walks through your log files, notes any weirdness, and reports back. The second, logtail, remembers where it left off in each log file, so that the same information isn't fed to logcheck.sh twice. There are also a few additional configuration files, which we will cover a little later. Here's what you do to install the package:

     tar -xzvf logcheck-1.1.1.tar.gz
     cd logcheck-1.1.1
     make linux

The install runs pretty quickly, and in a matter of seconds, you are ready to roll. The first thing we need to do is a little local configuration. Using your favorite editor (yes, I am a vi guy), modify the /usr/local/etc/logcheck.sh script as follows. A little way down, you will notice an entry that looks like this:

     # Person to send log activity to
     SYSADMIN=root

The reports generated will then be e-mailed to that user. On my system, I use SYSADMIN=security, where "security" is a mail alias that e-mails a handful of people in different locations (just in case). If something terrible were to happen (i.e., the evil cracker strikes), I still have the evidence, because it has been mailed off the system.

You will also find another section titled "LOG FILE CONFIGURATION SECTION", where the log files monitored by logcheck are located. You can add or delete files as needed. Here are the ones from my own script file:

     $LOGTAIL /var/log/messages > $TMPDIR/check.$$
     $LOGTAIL /var/log/secure >> $TMPDIR/check.$$
     $LOGTAIL /var/log/maillog >> $TMPDIR/check.$$

Now that everything is configured, we need to set up cron to run the logcheck.sh script on a regular basis. Here is a sample entry for a root crontab (remember, you can add a cron entry with the command crontab -e). In this example, logcheck.sh will run four times every hour at 15-minute intervals.

     0,15,30,45 * * * * /usr/local/etc/logcheck.sh

When logcheck runs, it divides the report into three main sections: "Active system attacks", "Security violations" and "Unusual system events". Note that some of these items may be reported in all three areas, such as anything that qualifies as an ACTIVE SYSTEM ATTACK. The keywords that will trip such a message are in one of those other files you might remember me mentioning earlier, in this case "logcheck.hacking". Three other files are called "logcheck.violations", "logcheck.violations.ignore" and just plain old "logcheck.ignore". You will find them all in /usr/local/etc.

logcheck.hacking has nasty little keywords like ATTACK, LOGIN FAILURE and so on. Messages matching anything in this file are sent to your e-mail address, with the subject line reading "ACTIVE SYSTEM ATTACK". Apparently, this is designed to get your attention, and it does. Messages matching the keywords in logcheck.violations will show up under the "Security violations" heading. The last file, logcheck.violations.ignore, is exactly what it sounds like: a list of keywords for logcheck to ignore. For instance, in the case of my own internal network, I tell logcheck to ignore anything that has "192.168.1." by adding that to the file. By default, the only thing in that file is "stat=Deferred".
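
Since those ignore files are read as lists of egrep patterns, one per line, it is worth escaping the literal dots when you add an address. Here is a sketch; a temp file stands in for /usr/local/etc/logcheck.violations.ignore so it is safe to try anywhere.

```shell
# Sketch: teach logcheck to ignore the internal network. The ignore
# files are egrep (grep -E) pattern lists, one per line, so the
# literal dots are escaped. A temp file stands in here for
# /usr/local/etc/logcheck.violations.ignore.
ignore=$(mktemp)
printf '%s\n' '192\.168\.1\.' >> "$ignore"

# Any log line mentioning the internal network now matches the list:
echo 'Jul  5 16:05:04 netgate ipop3d[3908]: Login failure host=[192.168.1.6]' |
grep -E -f "$ignore"

rm -f "$ignore"
```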

The last file, logcheck.ignore, applies to any and all types of messages. If you scan through it, you'll see a fair bit there: lame server messages from named, cron startups, sendmail stats and others. Type more /usr/local/etc/logcheck.ignore to look these over. Like the other files, you can customize them to your needs. WARNING!! Because of the volume of information logcheck generates (it's not that bad, really), you'll be tempted to add a lot of things to these ignore files, but be careful. You don't want to start filtering out important data for the sake of a cleaner report. More information is almost always better than less. Here's a (very) small sample of logcheck's output:

Security Violations
=-=-=-=-=-=-=-=-=-=
Jul  5 16:05:03 netgate PAM_pwdb[3908]: authentication failure; (uid=0) ->
+edgarc for pop service
Jul  5 16:05:04 netgate ipop3d[3908]: Login failure user=edgarc
+host=[192.168.1.6]
Jul  5 16:06:36 netgate PAM_pwdb[3912]: authentication failure; (uid=0) ->
+edgarc for pop service

It seems that "edgarc" may have forgotten his password.

As you can see, with a package like this, your job of filtering and looking through logs can be greatly simplified. As you get more comfortable with what you expect to see in those logs, you can customize the keyword files to deliver the information you want. Even with the default files, logcheck is a great little package.

It is time to wrap this up. When next we convene here at the SysAdmin's Corner, we'll look at ways to build on what we have so far to extend our wary eyes to spot script kiddies and system crackers before they do any real damage. So, do I have your attention? Until then, remember: your Linux system is talking to you. Are you listening?
