Linux's Tell-Tale Heart, Part 2
Hello, everyone, and welcome back to the Corner. It's great to see all your smiling faces gathered 'round for another week of getting in touch with your inner Linux system. The way we do that is by listening to what our computers are telling us. As I hinted last time, there is an awful lot of dialog going on in that Linux system of yours, and logs are just part of the picture. Since they are such an important part, however, we will dig a little deeper before moving on to other secrets.
The first thing I want to do this week is compliment the many readers of this column who rightfully looked at my "silly Perl script" of last week and said, "Uh, Marcel, you know there is a better way of doing that, don't you?" Before I tell you the better way, I will bare my soul here and now, and tell you in all honesty that I had not even considered that better way. Thanks to all who wrote for keeping my brain from turning to mush.
The better way to watch a log file for changes is with the tail command's -f flag. Let's say you were keeping an eye out for the evil system cracker, and consequently wanted to keep an eye on your /var/log/secure file. Open up a terminal window, and type this command.
tail -f /var/log/secure
The -f flag tells tail to "follow" the file: until I cancel the command with Ctrl-C, tail will continue printing any new content that appears in the file. Again, I thank my readers for keeping me on my toes.
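Incidentally, if you only care about certain entries while following a file, tail combines nicely with grep. The pattern here is just an illustration; substitute whatever you are hunting for:

```shell
# Follow the secure log, but show only lines mentioning failures
tail -f /var/log/secure | grep -i 'failure'
```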
Speaking of log files: sure, the system is busy writing this wordy diary, but there may be situations in which you want to write to the system log yourself. To do this, you use a little program called logger. What logger does is provide you (or your scripts) with a command-line interface to the syslog system. So why, you ask, is this a good thing? I'll start answering by saying that I genuinely admire Real(tm) Programmers. I, for one, am not given to spending hours hacking C code when I can get away with writing a quick and dirty script. A real C programmer would use the syslog libraries and write themselves a classy little dæmon. I, being a lazy system administrator type, would write a script that uses a clever tool written by a C programmer, a tool such as logger. With logger, I can log to a standard system location (/var/log/messages) while my syslogd dæmon is busy taking care of other business. You might also remember from the last article that syslogd can write to another system's logs as well.
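That remote logging, by the way, is set up on the sending machine's /etc/syslog.conf with a line like this (the name "loghost" is my assumption; use your central log server's hostname, and send syslogd a HUP so it rereads the file):

```
# /etc/syslog.conf on the remote system: forward everything to loghost
*.*     @loghost
```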
Imagine a script called "natika" that watches some critical system resources on my server. If those resources drop below a certain level, the script writes a message to the system log file, in this case /var/log/messages. The command I would put in my script to accomplish this is below. Note that logger does not pick the destination file itself; syslogd decides that, based on the message's facility and priority (user.notice by default, which most distributions route to /var/log/messages). The -i flag includes logger's process ID in the entry.

logger -i "Low on coffee. This is very important."
If I "tail" my /var/log/messages file, I get this result:
# tail -1 /var/log/messages
Jul 5 14:32:06 natika logger: Low on coffee. This is very important.
Since my system is listening for natika's syslog messages, I will know right away if something important is happening on the other system.
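To make this concrete, here is a minimal sketch of what such a watchdog script might look like. The 90% threshold and the choice of disk space as the "critical resource" are my assumptions, not a prescription:

```shell
#!/bin/sh
# Hypothetical "natika" watchdog: complain to syslog when any
# filesystem fills past THRESHOLD percent.
THRESHOLD=90

# df -P gives POSIX-stable columns: field 5 is "Capacity" (e.g. 95%),
# field 6 is the mount point. Strip the % sign for arithmetic.
df -P | awk 'NR > 1 { sub(/%/, "", $5); print $5, $6 }' |
while read pct mount; do
    if [ "$pct" -ge "$THRESHOLD" ]; then
        logger -i "natika: $mount is ${pct}% full"
    fi
done
```

Drop it in cron and syslogd does the rest, including any off-system forwarding you have configured.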
Along with your own user-generated log entries, you might have noticed that it is starting to get rather busy in your system's personal diaries. So how do you know what to look for? Running a tail -f of your messages log to your screen is great if you are connected and happen to be watching at the time, but what about the other times? Your logs also demand attention when you are not there, and (believe it or not) even system administrators have to sleep.
This is where logcheck from Psionic Software comes into play. This is an automated logfile analyser that works while you sleep, and it is available free of charge and GPL'ed. Follow http://www.psionic.com/abacus/logcheck/ to get your copy, then come right back for the details.
The latest version is logcheck-1.1.1.tar.gz. To install it, extract the files into a temporary directory and build the software. You might want to read the whole INSTALL file (although I will give you the quick and dirty); it contains some good stuff about securing your log files. If you are running a standard Linux distribution, the permissions on your log files are probably fine already (rw-------).
Logcheck consists (more or less) of two programs, logcheck.sh and logtail. The first, logcheck.sh, is a script that walks through your log files, notes any weirdness and reports back. The second, logtail, remembers where in your log files it last checked, so that it doesn't feed duplicate information to logcheck.sh. There are also a few additional configuration files, which we will cover a little later. Here's what you do to install the package:
tar -xzvf logcheck-1.1.1.tar.gz
cd logcheck-1.1.1
make linux
The install runs pretty quickly, and in a matter of seconds, you are ready to roll. The first thing we need to do is a little local configuration. Using your favorite editor (yes, I am a vi guy), modify the /usr/local/etc/logcheck.sh script in this way. A little way down, you will notice an entry that looks like this:
# Person to send log activity to
SYSADMIN=root
The reports generated will then be e-mailed to that user. On our system, I use SYSADMIN=security. "security" is a mail alias which e-mails a handful of people in different locations (just in case). If something terrible were to happen (i.e., the evil cracker strikes), I still have the evidence, because it has been mailed off system.
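For the curious, a mail alias like that lives in /etc/aliases. The addresses below are made up, but the shape is what counts; run newaliases after editing so the mail system picks up the change:

```
# /etc/aliases: fan "security" mail out to several people,
# including one off-system address for safekeeping
security: marcel, backupadmin@offsite.example.com
```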
You will also find another section titled "LOG FILE CONFIGURATION SECTION", where the log files monitored by logcheck are located. You can add or delete files as needed. Here are the ones from my own script file:
$LOGTAIL /var/log/messages > $TMPDIR/check.$$
$LOGTAIL /var/log/secure >> $TMPDIR/check.$$
$LOGTAIL /var/log/maillog >> $TMPDIR/check.$$
Now that everything is configured, we need to set up cron to run the logcheck.sh script on a regular basis. Here is a sample entry for a root crontab (remember, you can add a cron entry with the command crontab -e). In this example, logcheck.sh will run four times every hour at 15-minute intervals.
0,15,30,45 * * * * /usr/local/etc/logcheck.sh
When logcheck runs, it divides the report into three main sections: "Active system attacks", "Security violations" and "Unusual system events". Note that some of these items may be reported in all three areas, such as anything that qualifies as an ACTIVE SYSTEM ATTACK. The keywords that will trip such a message are in one of those other files you might remember me mentioning earlier, in this case "logcheck.hacking". Three other files are called "logcheck.violations", "logcheck.violations.ignore" and just plain old "logcheck.ignore". You will find them all in /usr/local/etc.
logcheck.hacking has nasty little keywords like ATTACK, LOGIN FAILURE and so on. Messages matching anything in this file are sent to your e-mail address with the subject line reading "ACTIVE SYSTEM ATTACK". This is designed to get your attention, and it does. Messages matching the keywords in logcheck.violations will show up under the "Security violations" heading. The logcheck.violations.ignore file is exactly what it sounds like: a list of keywords for logcheck to leave out of the violations report. For instance, in the case of my own internal network, I tell logcheck to ignore anything that has "192.168.1." by adding that to the file. By default, the only thing in that file is "stat=Deferred".
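Before committing a new pattern to one of these ignore files, it's worth a dry run to see exactly what it would suppress, since logcheck filters with ordinary grep patterns. Here, a couple of fabricated log lines stand in for your real messages file:

```shell
# Lines that match the candidate pattern are the ones
# logcheck would stop reporting.
printf '%s\n' \
    'Jul 5 16:10:01 netgate sendmail[999]: stat=Deferred' \
    'Jul 5 16:10:02 netgate ipop3d: Login failure user=edgarc' |
    grep -i 'stat=Deferred'
```

Only the sendmail line survives the filter, so you know the login failure would still be reported.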
The last file, logcheck.ignore, applies to any and all types of messages. If you scan through that file, you'll see a fair bit there - named "lame" messages, cron startups, sendmail stats and others. Type more /usr/local/etc/logcheck.ignore to look these over. Like the other files, you can customize them to your needs. WARNING!! You'll be tempted to add a lot of things to these ignore files, due to the volume of information logcheck generates (it's not that bad), but be careful. You don't want to start filtering out important data for the sake of a cleaner report. More information is almost always better than less. Here's a (very) small sample of logcheck's output:
Security Violations
=-=-=-=-=-=-=-=-=-=
Jul 5 16:05:03 netgate PAM_pwdb: authentication failure; (uid=0) -> edgarc for pop service
Jul 5 16:05:04 netgate ipop3d: Login failure user=edgarc host=[192.168.1.6]
Jul 5 16:06:36 netgate PAM_pwdb: authentication failure; (uid=0) -> edgarc for pop service
It seems that "edgarc" may have forgotten his password.
As you can see, with a package like this, your job of filtering and looking through logs can be greatly simplified. As you get more comfortable with what you expect to see in those logs, you can customize the keyword files to deliver the information you want. Even with the default files, logcheck is a great little package.
It is time to wrap this up. When next we convene here at the SysAdmin's Corner, we'll look at ways to build on what we have so far to extend our wary eyes to spot script kiddies and system crackers before they do any real damage. So, do I have your attention? Until then, remember: your Linux system is talking to you. Are you listening?