Broadly speaking, we can identify two types of backup: the system backup, which is a backup of the operating system and applications (the things only the sysadmin can alter), and the user backup, which is a backup of the users' files. (I don't know if anyone else uses these terms, but they'll do for the purposes of this article.) As we shall see, system backups and user backups should be treated differently.
The reason for making system backups is to minimize the effort required, following a crash, to get the system up and running as it was before disaster struck. However, you don't want to spend half your life backing up your disk; no one said it was fun! The key to backing up effectively is to back up only that which is absolutely necessary to enable speedy recovery when disaster strikes.
Think about it: most of your system is pretty stable—the contents of /usr/bin don't change that often, do they? To make things even easier, you probably have a rough copy of your system already; most people install Linux from a distribution of some sort, then make their own customizations. The original distribution is likely to be the starting point of a recovery for many of us.
Linux differs from most other operating systems in that the operating system and a large number of applications are typically installed in one go, whereas DOS-based systems, and even Unix-based systems other than Linux, tend to be installed in a more piecewise fashion: first the operating system, then each application, one by one. For those systems, it makes sense to back up the whole system; usually a lot of time and care has been invested in setting the system up in the first place. By contrast, installing or re-installing a basic Linux system (complete with applications) is usually a quick and painless affair.
Having just said that most of your system is pretty stable, let's consider what is likely to change. One way you will customize your system is by adding new programs (software that didn't come as part of your distribution). When installing new software, you should be strict with yourself, and keep any new programs separate from those on the distribution. The best place for them is in the /usr/local hierarchy. As its name suggests, /usr/local is designed to contain programs that are local to your system. The advantage in doing this is that you can easily see which programs you can restore from your distribution, and which programs you need to restore from elsewhere.
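The payoff of keeping everything local under one prefix is that "back up my own software" becomes a single tar invocation over /usr/local. The sketch below mimics the conventional layout in a temporary directory so it is safe to run unprivileged; a real source install would target /usr/local itself (typically ./configure --prefix=/usr/local && make && make install), and the program name "hello" is purely illustrative.

```shell
#!/bin/sh
# Mimic the /usr/local hierarchy in a scratch directory; a real
# install would place these under /usr/local directly.
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/bin" "$PREFIX/lib" "$PREFIX/man/man1"

# A stand-in for a locally installed program.
printf '#!/bin/sh\necho hello from a local program\n' > "$PREFIX/bin/hello"
chmod +x "$PREFIX/bin/hello"

# Everything local now sits under one prefix, so archiving it is
# one command:
tar cf "$PREFIX.tar" -C "$PREFIX" .
tar tf "$PREFIX.tar"
```

Restoring is just as simple: unpack the archive back over /usr/local, and the distribution-supplied files elsewhere on the system are untouched.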
Another thing you are likely to change is the configuration files the standard programs use. The behaviour of many standard Linux utilities is controlled by simple text files, which you can edit to tailor your system to your requirements. Sometimes distributions will “invisibly” edit some of these text files for you, based on your responses to certain questions, but often you have to edit them yourself.
A lot of the important files live in the /etc directory:
/etc/printcap—describes how to communicate with your printers
/etc/fstab—describes what file-systems you have
/etc/passwd—contains a list of all users, and their (encrypted) passwords
/etc/inittab—tells init how to set the system up for a given run level
/etc/XF86Config—describes the initial setup of XFree86
Depending on your system, there are likely to be many others as well. As you can see, the /etc directory is very important, and the files it contains are likely to be the result of hours of work. I don't know if I'm typical, but I spent a long time just getting XF86Config exactly how I wanted it. The thought of going through that again is enough to make me shudder. Of course, some programs will use files in other places, but most of the basic Linux system is configured using files in /etc.
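Because /etc is small, snapshotting it is cheap. The sketch below makes a dated archive; the destination directory is illustrative (in practice it might be a floppy, tape, or a directory on another disk), and it archives only /etc/passwd so the example runs unprivileged. As root, "tar czf ... -C / etc" captures the whole directory the same way.

```shell
#!/bin/sh
# Dated snapshot of configuration files. DEST is a stand-in for
# your real backup medium.
DEST=$(mktemp -d)
ARCHIVE="$DEST/etc-$(date +%Y%m%d).tar.gz"

# Archive relative to / so the stored paths are "etc/...", which
# makes restoring to the right place straightforward.
tar czf "$ARCHIVE" -C / etc/passwd
tar tzf "$ARCHIVE"
```

Keeping the date in the filename means several generations of the snapshot can coexist, which matters when a bad edit goes unnoticed for a while.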
When you modify the configuration files used by an existing program, you can't move them somewhere else; the program (usually) looks for them in a particular place. Therefore, it is important to keep track of what changes you've made, so that, should disaster strike, you can get them back easily. Make a note of all the modifications you make to the system, no matter how trivial they seem at the time.
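One low-tech way to keep track of in-place edits is to save a pristine copy of each file before you touch it, then use diff to see exactly what you changed. This is a sketch, not the only way to do it; the snapshot directory here is a temporary stand-in for something permanent like /var/backups, and only /etc/passwd is used as the example file.

```shell
#!/bin/sh
# Keep pristine copies of config files, and diff against them to
# recover "what did I change?" after the fact.
SNAP=$(mktemp -d)   # stand-in for e.g. /var/backups/etc-pristine
cp /etc/passwd "$SNAP/"

# ... later, after edits (none are made in this sketch) ...

# diff -u shows any changes line by line; exit status 0 means the
# file is unchanged.
diff -u "$SNAP/passwd" /etc/passwd && echo "no changes to passwd"
```

Those unified diffs are also exactly what you want to paste into your written notes: they record not just that a file changed, but how.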
The best tool for the job is a pen and some paper. Write yourself long descriptions of what you've done, and why. Don't fall into the trap of thinking that in six months' time you'll remember just how you got application Y to compile, or what the printcap entry to filter your PostScript files through Ghostscript was, because the chances are you won't. Even if you are installing new software in a separate directory so it's easy to keep track of, it won't hurt to write down what you installed, when you installed it, and whether there was anything that didn't seem obvious at the time.
Now that we've identified what kind of system files we need to back up, let's consider how often. Just after you've made a change is probably the most important time, but don't forget to keep a backup of how the system was before the latest change, just in case things do go wrong later because of your change. The point is that things only change when you change them, which probably isn't very often, and the frequency of your backups should reflect this.
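If you'd rather not rely on remembering to snapshot after each change, cron can take a periodic safety copy for you. The schedule and destination below are illustrative, a weekly job at a quiet hour, writing a dated archive; adjust both to how often you actually change things.

```shell
# Illustrative root crontab entry (add with "crontab -e"): every
# Sunday at 02:30, archive /etc to a dated tarball.
#
#   30 2 * * 0  tar czf /var/backups/etc-$(date +\%Y\%m\%d).tar.gz -C / etc
#
# Note the escaped % signs: in a crontab, an unescaped % is treated
# as a newline, so date format strings must be written as \%.
```

A weekly cadence is usually plenty for system files, since, as noted above, they only change when you change them.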