SMART (Smart Monitoring and Rebooting Tool)
There are many excellent monitoring tools (Big Brother, Nagios and so on), and some of them can recover dead services, but only at the cost of considerable configuration complexity. That complexity grows even further when you want to supervise local services that are not remotely accessible, such as syslog, xinetd, mrtg, iptables or Nagios itself.
The purpose of SMART was to have a simple, flexible and quick-to-implement application for monitoring the most critical system dæmons, one that made it possible to add new dæmons without modifying the code and avoided installation and configuration complexities. It also needed to be capable of making decisions and solving problems (or at least trying to).
After a first version of “passive” monitoring, we tried to go a step further and build an “active” application, that is, one capable of auto-recovery. Executed periodically through crond, it should detect dæmons that are down and restart them without intervention from the system administrator.
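For example, a crontab entry along these lines would run the tool periodically (the five-minute interval is an assumption; the path matches the installation shown in Listing 1):

```
# root's crontab: run SMART every five minutes
*/5 * * * * /home/sysman/smart
```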
Later, we considered the possibility that a nonprivileged user could execute this application from a console or remotely (via Telnet or SSH). Centralization of detection and error recovery in only one script made integration with sudo easier. Furthermore, it allowed delegating some stronger recovery actions needed in critical situations, such as rebooting the whole system, to this nonroot-privileged user.
With the ps command, we can list all the active processes in the system, but being “active” is not the same as being “operative”. This led us to include check scripts: small programs that test services and determine whether they really are operative and answering requests. The difficulties we encountered suggested that we not waste effort re-inventing the wheel and instead take advantage of the plugins included with Nagios (monitoring software we had been using satisfactorily for almost three years).
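Nagios plugins report their result through the exit status (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN), so a check script mostly needs to run the plugin and propagate that status. A minimal sketch of the idea (the check function and its messages are illustrative, not SMART's actual code):

```shell
#!/bin/sh
# Run a command (e.g., a Nagios plugin) and translate its exit
# status into the Nagios convention:
# 0=OK, 1=WARNING, 2=CRITICAL, anything else=UNKNOWN.
check() {
    "$@" >/dev/null 2>&1
    case $? in
        0) echo OK ;;
        1) echo WARNING ;;
        2) echo CRITICAL ;;
        *) echo UNKNOWN ;;
    esac
}

check true      # a command that succeeds -> OK
check false     # exit status 1 -> WARNING
```

In the real application, `true`/`false` would be replaced by a plugin invocation such as a check of an HTTP or SMTP service.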
The distribution of SMART has two shell scripts (smart and check-service), two configuration files (host.conf and services.conf) and two directories (scripts and plugins), which contain the check scripts and the plugins (Listing 1).
Listing 1. The SMART Installation Files and Directories
[root@server /]# ls -la /home/sysman/
drwxr-x---  4 root sysman 4096 May 27 11:49 .
drwxr-xr-x  3 root root   4096 Jul  8  2003 ..
-rwxr-x---  1 root sysman 1448 May 27 11:51 smart
-rwx------  1 root root   7815 May 27 11:51 check-service
-rw-r--r--  1 root root    242 May 27 11:49 host.conf
drwx------  2 root root   4096 Apr 29 13:38 plugins
drwx------  2 root root   4096 Apr 29 13:39 scripts
-rw-r--r--  1 root root    883 May 17 10:40 services.conf
The permissions of these files and directories allow a nonprivileged user called sysman to execute the application, but prevent sysman from modifying the contents or using the application in an inappropriate way.
The SMART program reads the configuration files services.conf and host.conf and executes check-service for each defined service. If a check script has been assigned to a service (for example, services 1 and 2 in Figure 1), check-service executes it, passing the needed parameters, and then waits for the exit status to determine whether the service is alive. If the check script in turn executes some other external script (a plugin), as service 1 does in Figure 1, that plugin is responsible for checking the service status.
If no check script has been assigned to a service (service 3 in Figure 1), check-service determines the service status from the number of active processes. Based on this information, the SMART command-line parameters and the configuration parameters, it decides what actions to carry out.
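The process-count fallback can be sketched as follows (a simplified illustration, not SMART's actual code; ps -C matches processes by command name, and crond is used only as an example service):

```shell
#!/bin/sh
# Count the active processes for a service by command name.
# A count of 0 means the service looks dead.
count_procs() {
    ps -C "$1" --no-headers 2>/dev/null | wc -l
}

if [ "$(count_procs crond)" -gt 0 ]; then
    echo "crond: active"
else
    echo "crond: down"    # here SMART would decide whether to restart it
fi
```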
Integration with the sudo (superuser do) tool allows the system administrator to permit another user (sysman) to start dead services, restart all the services or reboot the whole system. Advantages of this are:
Simple configuration: there's no need to give the user privileges to stop and start every individual service, nor access to administrative tools (ps, kill, rm and so on). The check-service script centralizes the whole operation.
Security: user sysman can't read, write or execute the check-service file.
Easy to use: the scripts are managed by sudo, so their usage is transparent to the user.
For a user sysman who needs these privileges on the host server, the sudo configuration file (/etc/sudoers) should be as shown in Listing 2.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
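That find-plus-grep combination is essentially a one-liner. A toy version, with sample data created on the fly (the paths and the search string are illustrative):

```shell
#!/bin/sh
# Create two sample log files to search.
mkdir -p /tmp/demo_logs
echo "2016-07-20 ERROR disk full" > /tmp/demo_logs/app.log
echo "2016-07-20 INFO all good"   > /tmp/demo_logs/web.log

# Find every .log file under the directory and list the ones
# that contain a particular entry.
find /tmp/demo_logs -name '*.log' -exec grep -l 'ERROR' {} +
# prints /tmp/demo_logs/app.log
```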
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.