A Linux-Based Automatic Backup System
After all the scripts have been written, you can put a symbolic link to the master script in one of the /etc/cron.* directories so that the computer will take care of the backups automatically. For my setup, I typed ln -s /root/backup/master /etc/cron.weekly/master to set up automatic weekly backups. You can back up on a daily basis if you need to, since the update option of archiving utilities minimizes resource requirements after the first run.
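To see why only the first pass is expensive, here is a minimal sketch using GNU tar's update option; the /tmp paths are illustrative stand-ins for the real backup locations:

```shell
# Update mode (-u) appends only files newer than the copies already in
# the archive, so subsequent runs read very little data.
mkdir -p /tmp/bk_demo/data
echo one > /tmp/bk_demo/data/a.txt
tar -cf /tmp/bk_demo/arch.tar -C /tmp/bk_demo data   # full first pass
echo two > /tmp/bk_demo/data/b.txt
tar -uf /tmp/bk_demo/arch.tar -C /tmp/bk_demo data   # appends only the new file
tar -tf /tmp/bk_demo/arch.tar                        # a.txt is listed only once
```

Unchanged files such as a.txt are not re-read or re-appended, which is what keeps daily runs cheap.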
The first run of a backup script, however, will require a lot of network bandwidth and CPU time. Hence, you may want to run it by hand the first time, or schedule it overnight with the at command.
Five important points should be noted:
Any shell script containing passwords should be made unreadable by anyone but its owner, using a command such as chmod go-r.
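Stripping all group and other permissions, not just read, is slightly safer; a quick sketch with an illustrative file name:

```shell
# Lock down a script that embeds a password (file name is illustrative).
touch /tmp/do_backup.sh
chmod 644 /tmp/do_backup.sh      # a typical default: readable by everyone
chmod go-rwx /tmp/do_backup.sh   # remove group/other read, write and execute
stat -c '%a' /tmp/do_backup.sh   # prints 600: owner read/write only
```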
If your data is very sensitive, you need to set up adequate security measures to keep industrial spies from hacking into your Linux machine and stealing your centralized data. See the Linux security HOWTO for more information.
The smbmount program tends to vary slightly across different distributions of Linux. Hence, if the scripts in this article don't work quite right for you, check out the man pages to see how your version of smbmount handles its command-line options.
Users of the Windows computers must be taught to keep their data under a central directory, such as “users” or “data”, instead of in random directories scattered across their hard drives. Some people are too lazy to move their files into a central directory, even though it takes only five seconds. You may have to move their files yourself before they will even start using the centralized directory. Remember, though, that these users may be the greatest threat to your organization in terms of data loss, since they never bother to make backup copies of their own data.
Finally, a hard drive is a very practical place to put backups of irreplaceable data. My archive files use less than 400MB of hard disk space yet contain more than 1.5GB worth of data. However, you may want to consider obtaining a large-capacity removable drive for your Linux machine. With it, you can occasionally copy the archive files from your hard disk to a removable disk and take them off-site, in case of physical destruction or theft of the machine.
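The off-site copy itself is simple; this sketch uses /tmp directories in place of the real archive directory and removable drive's mount point, which will differ on your system:

```shell
# Copy the archive files to the removable drive and verify the copy.
mkdir -p /tmp/archives /tmp/removable      # stand-ins for the real paths
echo "archive payload" > /tmp/archives/pc1.tar.gz
cp /tmp/archives/*.tar.gz /tmp/removable/
cmp -s /tmp/archives/pc1.tar.gz /tmp/removable/pc1.tar.gz && echo "copy verified"
```

The cmp check catches a partial or failed copy before you walk out the door with the disk.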
A Linux-based network backup system for irreplaceable data files on many networked computers is inexpensive, reliable, easy to set up, trivial to expand and extremely practical. With just an hour of time you can potentially save your group or company many thousands of dollars in the case of a hard drive crash. Currently, my Pentium 150 workstation keeps archives of years of mission-critical data from eight computers spread across three buildings and two subnets. It takes me less than two minutes to add a new computer to the system due to the use of shell variables in the scripts.
This is the kind of task Linux was born to do. You can take an old surplus computer, make it “headless” with no keyboard or monitor and stick it somewhere in a closet where it will humbly do its work unseen. You can also run the system on your personal workstation, since the Linux tools can run in the background. If you need to restore files to a crashed computer, you can set up an FTP server on the Linux machine on the fly, or simply take the hard drive out and put it inside a Windows machine. Since Linux has been designed to coexist with many different computers and operating systems, you can adapt the scripts to back up many different kinds of computers, including other Linux machines via NFS and even Macintosh computers with the netatalk and hfs packages.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from its stability and efficiency) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators almost always seem to have the right tool for the job.
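That find-plus-grep combination can be written as a one-liner; here it runs against a scratch directory instead of /home, and the file names and search string are illustrative:

```shell
# Build a small sandbox to search.
mkdir -p /tmp/demo_home/alice /tmp/demo_home/bob
echo "ERROR: disk full" > /tmp/demo_home/alice/app.log
echo "all quiet"        > /tmp/demo_home/bob/app.log

# List every .log file under the tree that contains the entry "ERROR".
find /tmp/demo_home -name '*.log' -exec grep -l 'ERROR' {} +
# prints /tmp/demo_home/alice/app.log
```

The `-exec … {} +` form hands find's results to grep in batches, avoiding one grep process per file.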
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents a practical planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.