System Information Retrieval
In issue 39 of Linux Journal (“Is Linux Reliable Enough?”, July 1997), Phil Hughes writes about downtime due to the failure of a hard disk:
At some point we had a configuration disk for our firewall; but when we needed to replace the hard disk, the configuration disk had vanished. This loss cost hours of work time and probably a day of uptime. Having a complete backup of everything, boot disks for all machines, spare cables and disk drives and other assorted parts can make a big difference in the elapsed time to deal with a problem.
I've developed a script to simplify the kinds of Linux system administration difficulties which Mr. Hughes describes. I use the script on all my Linux systems and feel it would benefit other system administrators as well as Mr. Hughes.
I've installed Linux on four Intel Pentium-based systems and seven Intel 486-based systems. All of the 486-based systems had previously been abandoned because they had neither sufficient processing power nor sufficient memory for Windows for Workgroups, Windows 95 or Windows NT, my company's choices for a desktop operating system. All of these 486-based systems run Linux very capably.
I use these Linux systems for network troubleshooting, testing, research, evaluation, experimentation and program development. Installing and using Linux in a large corporate enterprise has helped me learn more about DNS, networking, network programming, HTML and HTTP, system administration and other aspects of the Unix environment.
Although these Linux systems have been extremely useful, the age and diversity of the equipment involved makes system-administration tasks difficult at times. Consider the mix of equipment shown in Table 1, “Linux Systems and Major Components”. (This table also lists the names of the Linux systems I'll be referring to throughout this article.) The permutations of five computer vendors, three disk types, seven types of networking cards (the five NE2000 clones are from three vendors), and four CD-ROM types create some interesting installation, configuration and administrative headaches.
I've encountered other significant system-administration difficulties as well:
The various hardware components of these systems change from time to time as research and evaluation needs dictate.
Because I am trying to win acceptance of Linux within my organization, I perform most of the system-administration functions on my own time.
None of these systems have a working tape backup unit.
These systems are distributed among three locations within the Memphis area. All are interconnected via a metropolitan area network that forms the basis for a method of simplifying system-administration duties.
As if these issues weren't serious enough, soon after installing my sixth Linux system, its hard disk began failing. Since the disk was failing slowly, I had time to recover all the pertinent configuration information to enable me to reinstall and reconfigure Linux quickly after I replaced the failing disk.
Listing 1 shows a shell script I created to ease the chores of maintaining multiple, disparate Linux systems. The script, which I call collect, uses remote shell commands (rsh) and remote copy commands (rcp) to copy a number of files (which are described briefly in the “Collected Files” box) from a remote Linux system to “cuthroat”, my primary system-administration system.
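Listing 1 contains the actual script; purely to illustrate the technique it uses, here is a minimal sketch of a collect-style function. The specific file list, /proc entries and destination names are assumptions based on the description in this article, not the original listing:

```shell
#!/bin/sh
# Illustrative sketch of a collect-style function (not the original
# Listing 1). It copies configuration data from a remote Linux host
# into /admin/<host> using rsh and rcp, echoing progress as it goes.

collect() {
    host=$1
    if [ -z "$host" ]; then
        echo "usage: collect hostname" >&2
        return 1
    fi
    mkdir -p "/admin/$host" || return 1

    echo -n "$host: copying /proc"
    # /proc entries are synthetic files, so read them with rsh/cat
    # rather than rcp
    for f in cpuinfo meminfo version; do
        rsh "$host" cat "/proc/$f" > "/admin/$host/$f"
    done

    echo -n ", .config, lilo.conf"
    rcp "$host:/usr/src/linux/.config" "/admin/$host/kernel-config"
    rcp "$host:/etc/lilo.conf" "/admin/$host/lilo.conf"

    echo ", partition info"
    rsh "$host" /sbin/fdisk -l > "/admin/$host/partitions"
}
```

Each rsh and rcp call assumes the remote host trusts the collecting machine via its .rhosts file; without that trust, every command would prompt for a password or fail.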
If I lose any Linux file system (except for cuthroat's), I don't have to be concerned about losing important configuration information. As we'll see later, since I propagate all the collected information on cuthroat to several other systems, I don't have to worry about losing cuthroat's file system.
After writing and testing the collect script, I created the /admin directory on cuthroat and moved the script to this directory. When I wish to collect system-administration information from a Linux system (barb, for example) and store that information on cuthroat, I log on to cuthroat and type the following commands:
cd /admin
collect barb
If the /admin/barb directory doesn't exist, the collect script creates it, and then begins copying the remote system's files. In the spirit of UNIX brevity, the only screen output is a single line:
barb: copying /proc, .config, lilo.conf, partition info

This line, built by several echo -n command lines and a final echo command line, indicates the progress of the remote operations. Once the collect script finishes, directory /admin/barb on cuthroat contains a copy of barb's system-administration files.
I could, of course, run collect for an arbitrary number of systems as follows:
cd /admin
for i in anthrax barb ducktape
do
    collect $i
done
After the loop in the example above executes, the resulting contents of cuthroat's /admin directory are shown in Figure 1.
I can run collect on cuthroat to copy cuthroat's own files (rather than a remote system's files) as shown in the following example:
(log on to cuthroat)
cd /admin
collect cuthroat
If cuthroat's .rhosts file names cuthroat itself, the collect script will execute correctly and copy the collected files into cuthroat's /admin/cuthroat directory.
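For context, the rsh and rcp commands that collect issues are authorized by the remote account's ~/.rhosts file. An illustrative entry (the host and user names here are examples) looks like this:

```
# ~root/.rhosts on barb: allow root on cuthroat to run
# rsh/rcp commands without a password
cuthroat root
```

A host that also names itself in its own .rhosts file can run collect against itself, as described above. Note that rsh-style trust sends everything unencrypted; on modern systems, ssh and scp with key-based authentication serve the same role.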
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
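The find-plus-grep combination described above fits in a few lines. The directory and search string here are hypothetical stand-ins for /home and a real log entry:

```shell
# Build a small sandbox of .log files (hypothetical sample data)
logdir=$(mktemp -d)
echo "connection ERROR on eth0" > "$logdir/net.log"
echo "all quiet"                > "$logdir/ok.log"

# The composite tool: find every .log file under $logdir and list
# those containing the string ERROR. For a real system, replace
# $logdir with /home and ERROR with the entry of interest.
find "$logdir" -name '*.log' -exec grep -l 'ERROR' {} +
```

Here grep's -l option prints only the names of matching files, so the combined command reports which logs contain the entry rather than every matching line.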
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and a platform that multithreads like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.

Get the Guide