Linux as a Backup E-mail Server
You've probably guessed by now that I used this very same technique to set up my fall-back e-mail server. Serendipitously, one of the department's older 486 computers was slated for replacement with a shiny new Pentium-based PC. This computer, with 16MB of RAM, a 600MB hard drive and an Ethernet card, was perfect for the job. Setting up the computer was relatively easy. I chose to install the Debian distribution, since I'm most familiar with it, but I could just as easily have used any of the other distributions. The components essential to my setup are the latest version of sendmail, the Apache web server (although NCSA, CERN or another web server would have worked) and Perl. I also installed other tools, such as Emacs, gcc, make, rcs and vi, to make myself more comfortable while working on the system.
These utilities also allowed me to rebuild the kernel without using another system, and they should come in handy if I ever have to repair the system after some sort of mishap. I left all X11 libraries off the system to save disk space. After compiling a kernel containing just the modules I needed to run the system, I created a rescue boot floppy with this same kernel. There's no reason not to be prepared for the worst. Finally, I dubbed the system bartleby, after the downtrodden scrivener of the Dead Letter Office in Herman Melville's short story "Bartleby, the Scrivener".
The next step was to choose an appropriate location for bartleby to live. Simply placing bartleby in the same room as tuccster would provide adequate backup if tuccster crashed of its own accord again. However, by now I had disaster recovery on the brain and wanted to protect against other sorts of failures. I finally settled on a location in our mainframe room, which is located in a different building. This placed bartleby on a different subnet than tuccster, so no e-mail would be lost if our own subnet failed. Being in the mainframe room also meant that bartleby was located on a protected power system.
Long after tuccster has been automatically shut down by its UPS, bartleby will be merrily spooling any incoming e-mail. The wisdom of placing the system on a separate subnet was proven several months later when our own subnet was accidentally disconnected by maintenance workers. With bartleby properly in place, all that was left was adding the appropriate entry to our DNS and configuring sendmail and the web server.
Just as in my example with the Three Stooges, I wanted to add MX records to our domain name server so that both bartleby and tuccster were listed as MX hosts for tuccster, with tuccster as the preferred host. The two MX records as I entered them are shown in Listing 1.
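Listing 1 isn't reproduced here, but a pair of MX records like the ones described would look something like the following in BIND zone-file syntax (the domain name is a hypothetical stand-in):

```
; Lower preference value = more preferred delivery target.
tuccster    IN  MX  10  tuccster.example.edu.
            IN  MX  20  bartleby.example.edu.
```

A sending mail server tries the lowest-numbered MX host first and falls back to the higher-numbered one only when the preferred host is unreachable.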
While configuring sendmail is often something to fear, it was easy in this case. sendmail will correctly process deferred e-mail by default. All that was necessary was configuring sendmail to send and receive e-mail directed to and from bartleby, something handled automatically by the Debian package install script. Once I finished the sendmail install, bartleby was working correctly as a fall-back e-mail server. The next time I brought tuccster down for maintenance, I examined the sendmail queue on bartleby using the sendmail -bp command. Sure enough, several e-mail messages were in the queue, waiting for tuccster to wake up again. After I brought tuccster back on-line, I flushed the queue using the sendmail -q command.
It is possible to adjust the frequency with which sendmail attempts to process any spooled messages by appending a time argument to the -q flag when sendmail is first started; the sendmail man page describes the syntax. The Debian sendmail package includes a macro at the beginning of the startup script that makes setting this value easy. I set mine to 30 minutes (the actual command-line argument is -q30m). I automatically shut down the Exchange server late each Friday night to save its message store to tape, and I wanted bartleby to flush its queue automatically so any messages deferred during the backup would arrive properly without human intervention. You may wish to lengthen the time between queue flushes.
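Assuming a stock startup script, the resulting daemon invocation would look something like this (the path is illustrative):

```
# Run sendmail as a daemon (-bd) and retry the queue every 30 minutes (-q30m).
/usr/sbin/sendmail -bd -q30m
```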
The system as I had it set up was pretty good. However, it did require at least a minimal understanding of the UNIX shell to log in and check or flush the sendmail queue. I wanted to make the system as foolproof as possible in the event something happened while I was away from the office. I settled on a simple CGI script written in Perl. This script (see Listing 2) displays the current contents of the sendmail queue, as well as the output from the df command. This allows someone to see at a glance what e-mail messages are spooled in the system and how much disk space remains. If the sendmail queue is not empty, a link is created that can be clicked to flush the queue.
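The actual script in Listing 2 is written in Perl; purely to illustrate the idea, here is a rough shell equivalent. The mailq command (overridable via MAILQ_CMD) and the flush URL are stand-ins, not values from the article:

```shell
#!/bin/sh
# Sketch of a queue-status CGI: show the sendmail queue and free disk
# space, and offer a flush link only when messages are waiting.

status_page() {
    # Capture the queue listing; "mailq" is equivalent to "sendmail -bp".
    queue=$(${MAILQ_CMD:-mailq} 2>&1 || true)
    printf 'Content-type: text/html\r\n\r\n'
    printf '<html><body><h2>Mail queue</h2><pre>%s</pre>\n' "$queue"
    printf '<h2>Disk space</h2><pre>%s</pre>\n' "$(df)"
    # Offer a flush link only when something is actually queued.
    case $queue in
        *'Mail queue is empty'*) ;;
        *) printf '<p><a href="/cgi-bin/flush.cgi">Flush the queue</a></p>\n' ;;
    esac
    printf '</body></html>\n'
}

status_page   # a real CGI script ends by emitting the page
```

The flush link would point at a second, equally small script that simply runs sendmail -q.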
Since this is the only service this web server provides, I called it “index.cgi” and placed it in the root of the server directory. Adding “index.cgi” to the DirectoryIndex list in the srm.conf configuration file causes the script to be executed automatically when the server is accessed. I also adjusted the MinSpareServers and MaxSpareServers values in the httpd.conf file, so that no spare servers are left running. In this case, I don't mind slowing the response time to free up a little extra memory.

There were a few final security issues to contemplate. I consider the listing of the sendmail queue to be “sensitive but unclassified” information. By this, I mean I don't feel an intruder would find the information contained therein particularly useful, but then again I don't see the need to make the information easily available to the entire Internet. The action of flushing the queue is nearly harmless on its own, though I suppose it would be possible to mount a denial-of-service attack by repeatedly flushing the queue. In the end, I decided to allow non-password-based access from a handful of machines in the department.
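The relevant configuration lines would look something like this (values illustrative; on a busy server you would not want spare-server counts this low):

```
# srm.conf: execute the CGI script as the directory index
DirectoryIndex index.cgi

# httpd.conf: keep idle child processes to a bare minimum
MinSpareServers 1
MaxSpareServers 1
```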
Each of these machines is located in the office of an administrator who might have a reason to check the queue. I implemented these restrictions with the Apache directives shown in Listing 3 in the access.conf file.
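A generic sketch of such host-based restrictions in Apache 1.x access.conf syntax (the directory path and addresses are hypothetical, not the article's Listing 3):

```
<Directory /var/www>
    order deny,allow
    deny from all
    allow from 192.168.10.5 192.168.10.6 192.168.10.7
</Directory>
```

Requests from any other address receive a 403 Forbidden response.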
I could have added the standard plaintext password authentication available through nearly every web browser, but I don't believe it would have added enough real security to offset the inconvenience. A screen shot of this web interface is shown below.
I decided to restrict shell access to myself. Furthermore, I installed the S/Key system from Bellcore to avoid plaintext passwords and used TCP Wrappers to restrict the hosts that can log into the system. While there is little to be gained (other than, perhaps, traffic analysis) from examining the sendmail queue, much could be gained from unrestricted access to any incoming e-mail message.
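A TCP Wrappers setup along these lines might look like the following (service names and the workstation address are illustrative):

```
# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- admit logins only from my own workstation
in.telnetd: 192.168.10.5
```

With a default-deny policy in hosts.deny, each service must be explicitly opened in hosts.allow.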
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem always to have the right tool for the job.
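The .log example above can be sketched in a couple of lines of shell (the function name and search pattern are ours, purely for illustration):

```shell
#!/bin/sh
# grep_logs DIR PATTERN -- find every *.log file beneath DIR and search
# each one for PATTERN, printing matching lines prefixed with a filename.
grep_logs() {
    find "$1" -name '*.log' -exec grep -H -- "$2" {} +
}

# e.g., search every log under /home for "disk full":
#   grep_logs /home 'disk full'
```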
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't account for the total cost of ownership, nor for the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide