SMART (Smart Monitoring and Rebooting Tool)
Listing 3. Sample Output of the smart -d Command
[sysman@server ~]# ./smart -d
SERVICE    PID    PROCS  STATUS  PROBLEM
-------    -----  -----  ------  -------
CRON       451    1      [OK]
DISK       ?      0      [OK]    No start command.
DHCP       444    1      [OK]
DNS        442    1      [OK]
HTTP       625    53     [WARN]  Too many processes (>30).
LPD        474    1      [OK]
MRTG       27017  1      [OK]
MYSQL      627    1      [OK]
NAGIOS     640    1      [OK]
NMB        633    1      [OK]
NTP        ?      1      [OK]
POSTFIX    619    0      [DOWN]  No response from service. [Starting...]
->POSTFIX  23945  1      [OK]
POSTGRES   560    3      [OK]
SLAP       643    1      [OK]
SMB        631    6      [OK]
SNMP       635    1      [OK]
SNMPTRAP   637    1      [OK]
SSH        654    3      [OK]
SYSLOG     402    1      [OK]
XINET      462    1      [OK]
There are some optional executable files, the check scripts, which are responsible for checking whether the monitored services really are operative and responding to requests. These files are written in shell (.sh extension) and Expect (.exp extension). Expect is a Tcl-based tool that automates interactive, text-based applications.
These scripts could be written in any programming language, because only the exit status is taken into account. If it's not equal to 0, we assume that the service did not answer or that the answer it gave was not the expected one. This means a check script not only can monitor services, but it also can perform any check that yields a Boolean result: whether the size of a directory exceeds a certain value, whether the number of logged-in users is greater than a desired limit, whether a kernel module is loaded and so on (Listing 4).
Listing 4. A Sample of the nag and Shell Scripts
[root@server /]# ls /home/sysman/scripts/
disk.nag       http-forb.nag  nfs.nag     pop3.nag     smtp.nag
disk.sh        http.nag       nfs.sh      printer.nag  snmp.nag
dns.nag        http.sh        nmb.sh      proxy.nag    ssh2.nag
dns.sh         imap.exp       ntp.sh      slap.nag     ssh.nag
ftp.exp        imap.nag       pgsql2.nag  slap.sh      ssh.sh
ftp.nag        mysql.nag      pgsql.nag   smb.nag
http-auth.nag  mysql.sh       pgsql.sh    smb.sh
http.exp       nagios.nag     pop3.exp    smtp.exp
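Because only the exit status matters, a check script can be very small. The following sketch (not part of the SMART distribution; the threshold and filesystem are made up for illustration) implements the disk-space idea mentioned above: it exits 0 while the root filesystem is below a limit and nonzero otherwise.

```shell
#!/bin/sh
# Hypothetical check script: succeed (exit 0) while the root filesystem
# is below LIMIT percent full; any nonzero exit status tells
# check-service that the check failed.
LIMIT=95
USAGE=$(df -P / | awk 'NR==2 { gsub(/%/, ""); print $5 }')

if [ "$USAGE" -lt "$LIMIT" ]; then
    exit 0      # below the threshold: check passes
else
    exit 1      # threshold exceeded: check-service sees a failure
fi
```

Swapping in a different test, such as counting logged-in users or grepping the loaded module list, only requires changing the command and the comparison; the exit-status contract stays the same.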
Files with the .nag extension are also shell scripts, but unlike the former ones, they call an external program (a plugin), passing it the parameters received from check-service in the order and format the plugin expects. The plugin checks the service and returns the gathered information to the check script, which interprets it and converts it into the exit status that check-service is waiting for (Listing 5).
Listing 5. nag Scripts Are Handled by Plugins
[root@server /]# ls /home/sysman/plugins/
check_disk  check_http    check_pgsql  check_snmp  check_udp
check_dns   check_imap    check_pop    check_ssh
check_ftp   check_nagios  check_smtp   check_tcp
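A .nag script is essentially glue between check-service and a plugin. The sketch below shows that pattern; the plugin path and the -H argument follow the usual Nagios check_http convention, but the script itself is hypothetical, and it falls back to `true` as a stand-in when the plugin isn't installed so the sketch can run anywhere.

```shell
#!/bin/sh
# Hypothetical .nag-style wrapper: run a Nagios plugin and collapse its
# verdict into the exit status that check-service expects.
PLUGIN=/home/sysman/plugins/check_http    # assumed install path
HOST=${1:-localhost}

# Stand-in so the sketch runs even without the plugin installed;
# a real .nag script would rely on the plugin being present.
[ -x "$PLUGIN" ] || PLUGIN=true

if "$PLUGIN" -H "$HOST" >/dev/null 2>&1; then
    exit 0      # plugin reported the service as healthy
else
    exit 1      # plugin failed: check-service treats the service as down
fi
```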
Plugins are programmed in C, Perl and Shell and belong to Nagios. Their sources can be downloaded independently of the Nagios distribution, and some of them require the additional installation of certain programs and libraries.
Software requirements include the following:
sudo: allows a user to execute a command as another user. This will be necessary if you are planning to allow a nonroot user to execute SMART.
awk: a pattern scanning and processing language. SMART uses it and expects to find it at /bin/awk. If that's not the case on your system, edit the check-service and smart files of the SMART distribution and modify the line where AWK="/bin/awk" is specified.
Nagios plugins: you can use the plugins distributed with SMART or download the newest ones from the Nagios project; some of them require the additional installation of certain programs and libraries.
Some shell scripts (in the scripts directory of SMART) may require some specific commands to check some services, such as dig for dns, wget for Web services, nmblookup for nmbd (Samba), ntpq for NTP, ldapsearch for OpenLDAP and so on. The paths of these commands are defined in a variable at the beginning of each script, so you can change their location, use any other command that might work better for your system or even rewrite the whole script at your convenience.
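For illustration, the skeleton below mirrors that convention (the script body and command path are hypothetical, standing in for dig, wget, ntpq and the like): the external command's location sits in a single variable at the top, so adapting the script to another system means changing one line.

```shell
#!/bin/sh
# Hypothetical skeleton of a SMART check script: the helper command's
# path is a variable defined up front, as in the distributed scripts.
DF=/bin/df                      # adjust if df lives elsewhere

# Run the command; propagate any failure as a nonzero exit status.
"$DF" -P / >/dev/null 2>&1 || exit 1
exit 0
```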
With sudo you can permit another user to run SMART. If you're not interested in creating such a user, you can omit steps 1, 2 and 3 below.
1. Create user sysman and group sysman.
2. Create the SMART directory. It's a good idea to install it in sysman's home directory and to set the appropriate owner and permissions:

mkdir /home/sysman
chown root:sysman /home/sysman
chmod 750 /home/sysman
3. Edit the sudo configuration file /etc/sudoers, and add the following lines:

...
sysman hostname=(root) NOPASSWD: /home/sysman/check-service
sysman hostname=(root) NOPASSWD: /sbin/reboot
4. Download the SMART software.
5. Untar and unzip the distribution:

tar -zxf smart-X.Y.tar.gz
6. Go to the distribution directory, and copy the files to the destination directory. If you choose a destination different from /home/sysman, you will have to edit the smart file and modify the line where dir="/home/sysman" is specified:

cd smart-X.Y
cp check-service /home/sysman/
cp smart /home/sysman/
cp host.conf.dist /home/sysman/host.conf
cp services.conf.dist /home/sysman/services.conf
cp -r scripts /home/sysman/
cp -r plugins /home/sysman/
7. Go to the destination directory, and check/set file permissions and owners:

cd /home/sysman
chown -R root:root check-service scripts plugins host.conf services.conf
chown root:sysman smart
chmod -R 700 check-service scripts plugins
chmod 750 smart
chmod 644 host.conf services.conf
Configuration is as follows. First, edit the SMART host configuration file, host.conf, and modify it according to your preferences (hostname, mail addresses, command paths and so on). Then, edit the SMART services configuration file, services.conf, and uncomment/modify/add any service/dæmon you want to check. Every line describes one service, with the following semicolon-separated parameters:
NAME (non-empty string): descriptive service name (for example, IMAP).
process_name[:port] (non-empty string[:integer]): parent process name and its operational port (for example, couriertcpd:143).
process_param (string): parameters of running process. Some services run with the same process name, so parameters are useful to distinguish them. For example, the parent process of Courier IMAP and POP3 is couriertcpd, but one is executed with the parameter pop3d and the other one with imapd.
max_procs (non-empty integer): the highest number of running processes allowed (for example, 10). Leave it at 0 if what you're monitoring runs no processes (for example, disk space).
min_procs (non-empty integer): the lowest number of running processes allowed (for example, 1). Leave it at 0 if what you're monitoring runs no processes (for example, disk space).
start_command (string): the command to start the service or script to be executed when the service is down (for example, /courier/libexec/imapd.rc).
pid_file (string): pid file path (for example, /var/run/imapd.pid).
sock_file (string): socket file path.
start_mode (0/1): set it to 1 if the service is started/stopped by appending start/stop to the start command, or to 0 if that isn't necessary.
check_script (string): the name of the script used to check the service. This script has to be in the scripts directory (for example, imap.nag).
Leave the parameters empty if they are not applicable, except NAME, process_name, max_procs, min_procs and start_mode, which can't be empty.
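Putting those parameters together, a services.conf entry for the Courier IMAP example used above could look like the following line (the start command and pid file paths are illustrative, and sock_file is left empty because it doesn't apply):

```
IMAP;couriertcpd:143;imapd;10;1;/courier/libexec/imapd.rc;/var/run/imapd.pid;;1;imap.nag
```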
Now, you should be able to run SMART (the smart script in /home/sysman) as user root or sysman.
Try using -h to get more information about the available parameters. Running SMART through crond is a good idea. You can run it as frequently as you want, but every five minutes seems reasonable enough.
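For the five-minute interval, a system crontab entry along these lines would work (assuming the /home/sysman install path used above):

```
# min  hour  dom  mon  dow  user    command
*/5    *     *    *    *    root    /home/sysman/smart
```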