swatch: Automated Log Monitoring for the Vigilant but Lazy
So far we've only considered actions we want to be triggered every time a given pattern is matched. There are several ways we can control swatch's behavior with greater granularity, however.
The first and most obvious way is to take advantage of the fact that search patterns take the form of regular expressions. Regular expressions, which really constitute a pattern-matching language of their own, are incredibly powerful and responsible for a good deal of the magic that Perl, sed, vi and many other UNIX utilities can do.
It behooves you to know at least a couple of “regex” tricks, which I'll describe here. Trick number one is called alternation, and it adds a “logical or” to your regular expression in the form of a “|” sign. Consider this regular expression:

/reject|failed/

This expression will match any line containing either the word “reject” or the word “failed”. Use alternation when you want swatch to take the same action for more than one pattern.
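In a .swatchrc entry, that might look like this (the echo action here is purely illustrative):

/reject|failed/ echo=red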
Trick number two is the Perl-specific regular expression modifier “case-insensitive”, also known as “slash-i” since it always follows a regular expression's trailing slash. The regular expression /reject/i matches any line containing the word “reject”, whether it's spelled “Reject”, “REJECT”, “rEjEcT”, etc. Granted, this isn't nearly as useful as alternation, and in the interest of full disclosure, I'm compelled to mention that slash-i is one of the more CPU-intensive Perl modifiers. However, if despite your best efforts at log-tailing, self-attacking, etc., you aren't 100% sure how a worrisome attack might look in a log file, slash-i helps you make a reasonable guess.
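Putting this modifier to work in a .swatchrc entry is as simple as appending it to the pattern; again, the action shown is purely illustrative:

/reject/i echo=red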
If you wish to become a regular expression archimage, I recommend the book Mastering Regular Expressions by Jeffrey E. F. Friedl. See Resources for details.
Another way to control swatch to a greater degree is to specify what time of day a given action may be performed. You can do this by sticking a “when=” option after any action. For example, below I've got a .swatchrc entry for a medium-importance event I want to know about via console messages during weekdays, but I'll need e-mail messages to know about it during the weekend. To do this I set the when option:
/file system full/ echo=red mail addresses=mick\@visi.com,subject=Volume_Full,when=7-1:1-24
The syntax of the when= option is when=range_of_days:range_of_hours. Thus, we see that any time the message “file system full” is logged, swatch will echo the log entry to the console in red ink. It will also send e-mail, but only if it's Saturday (“7”) or Sunday (“1”).
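By the same logic, to have swatch perform an action only during business hours on weekdays, you'd use a value along these lines (sticking with the numbering above, in which 1 is Sunday and 7 is Saturday):

when=2-6:9-17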
swatch expects .swatchrc to live in the home directory of the user who invokes swatch. swatch also keeps its temporary files there by default (each time it's invoked it creates and runs a script called a “watcher process”, whose name ends with a dot followed by the PID of the swatch process that created it).
The -c path_to_configfile and --script-dir=path flags let you specify alternate locations for swatch's configuration and script files, respectively. Never keep either in a world-writable directory, however. In fact, only these files' owners should even be able to read them.
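For example, to make the default configuration file readable and writable only by its owner, a quick chmod does the trick:

chmod 600 ~/.swatchrc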
For example, to invoke swatch so it reads my custom configuration file in /var/log and also uses that directory for its watcher process script, I'd use this command:
swatch -c /var/log/.swatchrc.access --script-dir=/var/log &
I also need to tell swatch which file to tail, and for that I need the -t filename flag. If I wanted to use the above command to have swatch monitor /var/log/apache/access_log, it would look like this:
swatch -c /var/log/.swatchrc.access --script-dir=/var/log \
-t /var/log/apache/access_log &

swatch generally doesn't clean up after itself very well; it tends to leave watcher-process scripts behind. Keep an eye out for these scripts and periodically delete them, whether they're in your home directory or in the script directories you specify with --script-dir.
Again, if you want swatch to monitor multiple files, you'll need to run swatch multiple times, with at least a different tailing-target (-t value) specified each time and probably a different configuration file for each as well.
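For example, to watch both an Apache access log and the main system log, you might run a pair of swatch processes like these (the second configuration file's name is hypothetical):

swatch -c /var/log/.swatchrc.access --script-dir=/var/log \
-t /var/log/apache/access_log &
swatch -c /var/log/.swatchrc.messages --script-dir=/var/log \
-t /var/log/messages &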