Managing your Logs with Chklogs
One of the attributes that characterizes Unix systems, and therefore Linux, is true multitasking. Because of it, there are usually many processes running on your machine, including the not-so-evil daemons and other important programs such as uucico (Unix-to-Unix copy-in copy-out). If your system is properly configured, those programs leave traces of their activities in the system logs. These logs usually contain lots of data that must be filtered to generate more readable reports. In the case of system problems, these logs are a valuable source of information for tracking, and possibly solving, the problem.
Years ago, when I was faced with the prospect of spending some of my free time looking through raw logs and trimming them so that they wouldn't eat up disk space, I decided it was time to take action, so I wrote the program Chklogs. The name is an acronym for “Check Logs”, in the Unix tradition.
Whether you are an experienced Linux user, system administrator or a newcomer to the Linux world, you will certainly find this subject of interest. Although it is mainly a system administration tool, it also has applications in the user world.
In this article I will introduce you to version 2.0 of Chklogs, which should be out by the time this is published. Currently, version 1.9 release/build 2 (1.9-2) is available; it is the last version compatible with Perl 4.0x. Version 2.0 and higher require Perl 5.003, which is not much to ask, considering that about 99% of registered users already have that Perl version.
Here are some of the features offered by Chklogs:
Individual specification of thresholds and action(s) for each log
A choice of compression program
Log management by size or age
Addition of user extensions
Logical Log Groups with pre- and post-processing
Global and local repositories (Alternate Repository feature)
Log shuffling (also known as log rotation)
Requires no programming experience
User resource file
Fully based on Perl 5.003, except for the user interface
Nice Tcl/Tk user interface, available separately (See Resources at the end of this article.)
Logical Log Groups are groups of logs that have something in common; this could be a collection of UUCP logs, INN logs or anything you consider a group. You can use virtually any name; the only restriction is that the name must be valid to create a directory. Chklogs has two built-in groups: syslog and common (reserved).
Alternate Repository (AR) is a directory where the archived logs are stored after they have been processed if they met the threshold condition. By default, archived logs are created with tar and GNU zip. A local AR is a directory named OldLogs under the directory in which the log resides. A global AR is a directory hierarchy (you define the root of this AR) where log archives are divided into logical groups. Here, you will always see the “common” subdirectory where all “orphan” logs go, i.e., those that have not been explicitly declared to belong to a group.
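To make the layout concrete, here is what a hypothetical global AR rooted at /var/OldLogs might look like. The group names and archive file names are illustrative, not prescribed by Chklogs; only the tar-plus-GNU-zip default and the reserved common group come from the description above:

```
/var/OldLogs/
    uucp/                # a user-defined Logical Log Group
        uucico.tar.gz    # archived with tar and GNU zip (the default)
    inn/
        news.tar.gz
    common/              # reserved group: all "orphan" logs land here
        messages.tar.gz
```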
Log Shuffling is well known and is also called log rotation. The “phased-out” log is assigned a number or tag each time until a maximum is reached, at which point the oldest one is removed.
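The shuffling scheme is easy to picture in a few lines of shell. This is only a sketch of the idea, not Chklogs' actual implementation; the maximum of three tags is an arbitrary choice:

```shell
#!/bin/sh
# Minimal log-shuffling sketch (not Chklogs itself): rotate a log
# through numbered tags .1 .. .3; the oldest copy is removed.
MAX=3

rotate() {
    log=$1
    # Drop the oldest archive, if present.
    rm -f "$log.$MAX"
    # Shift each tagged copy up by one: .2 -> .3, then .1 -> .2
    i=$((MAX - 1))
    while [ "$i" -ge 1 ]; do
        [ -f "$log.$i" ] && mv "$log.$i" "$log.$((i + 1))"
        i=$((i - 1))
    done
    # The current log becomes .1; start a fresh, empty log.
    [ -f "$log" ] && mv "$log" "$log.1"
    : > "$log"
}
```

Each pass ages every archived copy by one tag, so after MAX passes the first phased-out log falls off the end, exactly as described above.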
Directory Lumping is another nice feature. Instead of specifying each log separately by name, you supply a directory name. Chklogs treats any non-archived, non-directory file it finds there as a log and acts on it according to the action specification. A real-world application of this option is a site that gives UUCP access to a large number of subdomains and keeps a separate UUCP transfer log for each one; managing all those logs by hand would be very cumbersome. Chklogs always determines first whether you have specified a directory or a file.
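As an illustration of the idea (a sketch, not Chklogs' own code; the threshold and archive suffixes are made up), a directory-lumping pass amounts to walking a directory, skipping subdirectories and anything already archived, and applying a size test to the rest:

```shell
#!/bin/sh
# Conceptual sketch of directory lumping: treat every plain,
# non-archived file in a directory as a log, and list those that
# exceed a size threshold (in bytes) and are due for processing.
check_dir() {
    dir=$1
    limit=$2
    for f in "$dir"/*; do
        [ -f "$f" ] || continue                  # skip subdirectories
        case $f in *.gz|*.tar) continue ;; esac  # skip existing archives
        size=$(wc -c < "$f")
        [ "$((size))" -gt "$limit" ] && echo "$f"
    done
    return 0
}
```

For a site keeping one UUCP transfer log per subdomain, something like `check_dir /var/log/uucp 102400` would name every per-site log over 100KB in a single call, with no per-log bookkeeping.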
Every time I make the first release/build of a new version (e.g., v2.0-1), I submit it to Sunsite and Funet FTP sites. Whenever a fix is required (and therefore no new features), I put out a new release/build (e.g., 2.0-2) only on the primary site, my ISP. To make sure that you have the latest version, check out the primary site and/or the Chklogs web page at http://www.iaehv.nl/users/grimaldo/info/.
Now, get “version 2.0 build 1” installed on your system. Unpack it and go to the root of the directory tree:
gunzip chklogs-2.0-1.tar.gz
tar xvf chklogs-2.0-1.tar
cd chklogs-2.0-1
Under the root tree is the bin directory, containing the scripts and modules that comprise the Chklogs package. The doc directory contains all the necessary documentation, including man pages in both troff and HTML format. A plug-out directory contains some extra utilities to scan your logs; use them as is or as examples for building your own. Last is the contrib directory. In the root of the tree are the README file, the release notes, a makefile for installation and, most important, the GuideMe script. Type:
GuideMe

and it will do just that, or at least make the attempt. This script probes your system and indicates which configuration parameters must be changed in which files. Follow its advice closely. At the end, it asks whether you wish to send in your registration. If you select “no”, that's fine. If “yes”, you will receive mail regarding updates and major fixes.
Assuming you have made the necessary configuration parameter changes (mailer, compress program, administrative account, library location, Perl location etc.), you are now ready to actually make Chklogs work for your system. I will also assume it has been installed (see Makefile) in /usr/local/sbin and /usr/local/lib/Degt/ by the make command (make install). The same library directory is shared by the graphical user interface.
If you are not yet sure you wish to commit your logs to Chklogs, you should make a back-up copy of them in a local directory. Run Chklogs on those copies until you feel safe, and you will. Doing this requires use of the resource file (see below).
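A trial setup along those lines might look like the following sketch; the paths are illustrative, and the user resource file would then be pointed at the copies rather than at the live logs:

```shell
#!/bin/sh
# Sketch of a safe trial run: copy live logs into a scratch directory
# so Chklogs can be exercised on the copies without any risk.
# The log paths below are examples; substitute your own.
SANDBOX=$HOME/chklogs-test
mkdir -p "$SANDBOX"
for log in /var/log/messages /var/log/syslog; do
    if [ -f "$log" ]; then
        cp -p "$log" "$SANDBOX/" 2>/dev/null || true
    fi
done
echo "Trial copies are in $SANDBOX"
```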