swatch: Automated Log Monitoring for the Vigilant but Lazy
Previously the Paranoid Penguin has pondered a plethora of powerful programs pursuant to protecting people's PCs from pernicious punks. [The right to excessive alliteration revocable at any time—Ed.] One important feature these tools share is logging; just as important as keeping system crackers out is knowing when they've tried to get in. But who's got the time or attention span to sift through scads of mostly innocuous log files on every system they administer, every single day?
swatch (the “Simple WATCHer”) does. swatch, written 100% in Perl, monitors logs as they're being written to and takes action when it finds something you've told it to look for. This simple, flexible and useful tool is a must-have for any healthily fearful system administrator.
There are two ways to install swatch. First, of course, is via whatever binary package of swatch, if any, your Linux distribution of choice provides. The current version of Mandrake has an RPM package of swatch, but none of the other more popular distributions (i.e., Red Hat, SuSE, Slackware or Debian) appear to include it.
This is just as well, though, because the second way to install swatch is quite interesting. swatch's source distribution, available from www.stanford.edu/~atkins/swatch, includes a sophisticated script called Makefile.PL. The script automatically checks for all necessary Perl modules and uses Perl 5's CPAN functionality to download and install any needed modules; it then generates a Makefile that can be used to build swatch.
After you've installed the required modules (either automatically via swatch's Makefile.PL script or manually) and then run perl Makefile.PL again, it should return the following:
[root@barrelofun swatch-3.0.1]# perl Makefile.PL
Checking for Time::HiRes 1.12 ... ok
Checking for Date::Calc ... ok
Checking for Date::Format ... ok
Checking for File::Tail ... ok
Checking if your kit is complete...
Looks good
Writing Makefile for swatch
[root@barrelofun swatch-3.0.1]#
Once Makefile.PL has successfully created a Makefile for swatch, you can execute the following commands to build and install it:
make
make test
make install
make realclean

The make test command is optional but useful; it ensures that swatch can properly use the Perl modules we took the trouble to install.
Since the whole point of swatch is to simplify our lives, configuring swatch itself is, well, simple. swatch is controlled by a single file, by default $HOME/.swatchrc. This file contains text patterns, in the form of regular expressions, that you wish swatch to watch for. Each regular expression is followed by the action(s) you wish swatch to take whenever it encounters that text.
For example, suppose you've got a web server, and you want to be alerted any time someone attempts a buffer-overflow attack by requesting an extremely long filename. By trying this yourself against the web server while tailing its /var/apache/error.log, you know that Apache will log an entry that includes the string “File name too long”. Suppose further that you want to be e-mailed every time this happens. Here's what you'd need to have in your .swatchrc file:
watchfor /File name too long/
	mail addresses=mick\@visi.com, subject=BufferOverflow_attempt
As you can see, the entry begins with a “watchfor” statement, followed by a regular expression. If you aren't proficient in the use of regular expressions yet (you are planning to learn regular expressions, aren't you?), don't worry: this can be as simple as a snippet of the text you want swatch to look for, spelled out verbatim between two slashes.
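To illustrate the difference, the first entry below matches a verbatim snippet, while the second uses a slightly fuller regular expression. The sshd-style log text and the alternation pattern are hypothetical, chosen purely for illustration:

```
# Simplest form: verbatim text between slashes
watchfor /Failed password/
	echo

# A fuller regular expression (hypothetical): match failed
# logins for the root or admin accounts only
watchfor /Failed password for (root|admin)/
	bell
```

Either form is a legal "watchfor" pattern; start with verbatim text and graduate to fuller expressions as you learn them.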
swatch will perform your choice of a number of actions when it matches your regular expression. In this example we've told swatch to send e-mail to mick\@visi.com, with a subject of BufferOverflow_attempt. Note the backslash before the @ sign; without it, Perl will interpret the @ sign as a special character. Note also that if you want spaces in your subject line, each space also needs to be escaped with a backslash, e.g., subject=Buffer\ Overflow\ attempt. Actions besides sending e-mail include those seen in Table 1.
For more details on configuring these and the other actions swatch supports, see the swatch(1) man page.
Let's take our example a step further. Suppose, in addition to being e-mailed about buffer-overflow attempts, you want to know whenever someone hits a certain web page, but only if you're logged on to a console at the time. In the same .swatchrc file, you'd add something like this:
watchfor /wuzza.html/
	echo=red
	bell 2
Each matching log entry will then be echoed to the console in red and followed by two beeps.
It's important to note you will only see these messages and hear these beeps if you are logged on the console in the same shell session from which you launched swatch. If you log out to go get a sandwich, when you return and log back in, you will no longer see messages generated by the swatch processes launched in your old session, even though those processes will still be running.
When in doubt, add either a "mail" action or some other action that isn't tied to a console session, such as an "exec" action that triggers a script that pages you. Unless, that is, the pattern in question isn't critical.
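An "exec" entry along these lines would fire regardless of whether you're watching a console (the pager script path here is hypothetical; substitute whatever notification command you actually use):

```
watchfor /File name too long/
	mail addresses=mick\@visi.com, subject=BufferOverflow_attempt
	exec /usr/local/bin/page-admin.sh
```

The mail and exec actions both run whether or not anyone is logged on, so critical alerts aren't lost between shell sessions.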
Alert readers have no doubt noticed that the scenario in the previous example will work only for Apache installations in which both errors and access messages are logged to the same file. We haven't associated different expressions with different watched files, nor can we do so. But what if you want swatch to watch more than one log file?
No problem. While each .swatchrc file may describe only one watched file, there's nothing to stop you from running multiple instances of swatch, each with its own .swatchrc file. In other words, .swatchrc is the default but not the required name for swatch configurations.
To split the two examples into two files, therefore, you'd put the lines in the previous simple .swatchrc entry into a file called, say, .swatchrc.hterror, and the lines in the previous watchfor entry into a file called .swatchrc.htaccess.
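You could then launch one swatch process per log file using swatch's --config-file and --tail-file options. The log paths below follow the Apache example above; adjust them to wherever your own Apache logs live:

```
swatch --config-file=$HOME/.swatchrc.hterror  --tail-file=/var/apache/error.log &
swatch --config-file=$HOME/.swatchrc.htaccess --tail-file=/var/apache/access.log &
```

Each instance tails its own file against its own configuration, so you can mix and match patterns and actions per log as you see fit.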