Backup Strategy

Everyone tells you how important it is to make backups. Explicit guidelines, however, are often lacking. Which files should you back up, and how often? This article will help you answer those questions, and use the answers to develop your own backup strategy.

Broadly speaking, we can identify two types of backup: the system backup, a backup of the operating system and applications (the things only the sysadmin can alter), and the user backup, a backup of the users' files. (I don't know if anyone else uses these terms, but they'll do for the purposes of this article.) As we shall see, system backups and user backups should be treated differently.

System backups

The reason for making system backups is to minimize the effort required, following a crash, to get the system up and running as it was before disaster struck. However, you don't want to spend half your life backing up your disk; no one said it was fun! The key to backing up effectively is to back up only what is absolutely necessary for a speedy recovery.

Think about it: most of your system is pretty stable—the contents of /usr/bin don't change that often, do they? To make things even easier, you probably have a rough copy of your system already; most people install Linux from a distribution of some sort, then make their own customizations. The original distribution is likely to be the starting point of a recovery for many of us.

Linux differs from most other operating systems in that the operating system and a large number of applications are typically installed in one go, whereas DOS-based systems, and even Unix systems other than Linux, tend to be installed piecemeal: first the operating system, then each application, one by one. For those systems it makes sense to back up the whole system, since a lot of time and care has usually been invested in setting it up in the first place. By contrast, installing or re-installing a basic Linux system (complete with applications) is usually a quick and painless affair.

Having just said that most of your system is pretty stable, let's consider what is likely to change. One way you will customize your system is by adding new programs (software that didn't come as part of your distribution). When installing new software, you should be strict with yourself, and keep any new programs separate from those on the distribution. The best place for them is in the /usr/local hierarchy. As its name suggests, /usr/local is designed to contain programs that are local to your system. The advantage in doing this is that you can easily see which programs you can restore from your distribution, and which programs you need to restore from elsewhere.
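
If your local software all lives under /usr/local, backing it up separately becomes a one-step job. Here is a minimal sketch using Python's standard tarfile module; the /var/backups destination and the date-stamped filename are only examples, so adjust them to suit your own system.

    #!/usr/bin/env python3
    # Minimal sketch: archive /usr/local separately from the distribution.
    # The /var/backups destination and naming scheme are only examples.
    import tarfile
    import time

    archive = "/var/backups/usr-local-%s.tar.gz" % time.strftime("%Y%m%d")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add("/usr/local")
    print("wrote", archive)

Run something like this after installing or upgrading anything under /usr/local, and restoring your local software after a re-install becomes a single extraction.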

Another thing you are likely to change is the configuration files the standard programs use. The behaviour of many standard Linux utilities is controlled by simple text files, which you can edit to tailor your system to your requirements. Sometimes distributions will “invisibly” edit some of these text files for you, based on your responses to certain questions, but often you have to edit them yourself.

A lot of the important files live in the /etc directory:

  • /etc/printcap—describes how to communicate with your printers

  • /etc/fstab—describes what file-systems you have

  • /etc/passwd—contains a list of all users, and their (encrypted) passwords

  • /etc/inittab—tells init how to set the system up for a given run level

  • /etc/XF86Config—describes the initial setup of XFree86

Depending on your system, there are likely to be many others as well. As you can see, the /etc directory is very important, and the files it contains are likely to be the result of hours of work. I don't know if I'm typical, but I spent a long time just getting XF86Config exactly how I want it. The thought of going through that again is enough to make me shudder. Of course, some programs will use files in other places, but most of the basic Linux system is configured using files in /etc.
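
If you're not sure which files under /etc you have actually touched, file modification times can give you a rough list. The sketch below is just one way to get such a list (the list-changed.py name in the usage comment is only a placeholder): it walks /etc and prints anything changed since a date you give it on the command line.

    #!/usr/bin/env python3
    # Minimal sketch: list files under /etc modified since a given date,
    # as a rough guide to which ones you have customized.
    # Usage (example): list-changed.py 1999-06-01
    import os
    import sys
    import time

    since = time.mktime(time.strptime(sys.argv[1], "%Y-%m-%d"))

    for dirpath, dirnames, filenames in os.walk("/etc"):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > since:
                    print(path)
            except OSError:
                pass  # skip broken symlinks and unreadable files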

When you modify the configuration files used by an existing program, you can't move them somewhere else; the program (usually) looks for them in a particular place. Therefore, it is important to keep track of what changes you've made, so that, should disaster strike, you can get them back easily. Make a note of all the modifications you make to the system, no matter how trivial they seem at the time.
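
One low-tech way to make "getting them back" easy, alongside the notes described next, is to save a dated copy of a configuration file before you edit it. The sketch below shows the idea; the /root/etc-history directory and the snapshot.py name are purely illustrative.

    #!/usr/bin/env python3
    # Minimal sketch: save a dated copy of a config file before editing it.
    # The /root/etc-history location is only an example.
    # Usage (example): snapshot.py /etc/fstab
    import os
    import shutil
    import sys
    import time

    HISTORY = "/root/etc-history"

    os.makedirs(HISTORY, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    for path in sys.argv[1:]:
        dest = os.path.join(HISTORY, os.path.basename(path) + "." + stamp)
        shutil.copy2(path, dest)  # copy2 preserves permissions and timestamps
        print("saved", path, "as", dest)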

The best tool for the job is a pen and some paper. Write yourself long descriptions of what you've done, and why. Don't fall into the trap of thinking that in six months' time you'll remember just how you got application Y to compile, or which printcap entry filters your PostScript files through Ghostscript, because the chances are you won't. Even if you are installing new software in a separate directory so it's easy to keep track of, it won't hurt to write down what you installed, when you installed it, and anything that didn't seem obvious at the time.

Now that we've identified what kind of system files we need to back up, let's consider how often. Just after you've made a change is probably the most important time, but don't forget to keep a backup of how the system was before the latest change, just in case things do go wrong later because of your change. The point is that things only change when you change them, which probably isn't very often, and the frequency of your backups should reflect this.
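
A simple way to hold on to "how the system was before the latest change" is to move the previous archive aside before writing a fresh one. The fragment below sketches that idea for an /etc archive; once again the paths are only examples, and a real scheme might well keep more than one old copy.

    #!/usr/bin/env python3
    # Minimal sketch: keep the previous /etc archive as *.old before
    # writing a fresh one. Paths are only examples.
    import os
    import tarfile

    ARCHIVE = "/var/backups/etc.tar.gz"

    if os.path.exists(ARCHIVE):
        os.replace(ARCHIVE, ARCHIVE + ".old")  # last backup, kept just in case

    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add("/etc")
    print("wrote", ARCHIVE)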
