Ideal Backups with zbackup

Data is growing both in volume and importance. As time goes on, the amount of data we need to store keeps increasing, and the data itself is becoming more and more critical for organizations, so being able to back up and restore this information quickly and reliably matters more than ever. At the same time, cloud-based systems spread the data over many servers and locations.

Where I work, data has grown from less than 1GB on a single server to more than 500GB spread out on more than 30 servers in multiple data centers. Catastrophes like the events at Distribute IT and Code Spaces demonstrate that ineffective backup practices can destroy a thriving business. Enterprise-level backup solutions typically cost a prohibitive amount, but the tools we need to create a backup solution exist within the Open Source community.

zbackup to the Rescue

After switching between many different backup strategies, I have found what is close to an ideal backup solution for our particular use case: regularly backing up many machines, each with huge numbers of files as well as some very large files, and being able to restore any backup previously made.

The solution combines zbackup, rsync and LVM snapshots. zbackup works by deduplicating a stream—for example, a tar or database backup—and storing the blocks into a storage pool. If the same block ever is encountered again, the previous one is reused.

Combining these three elements gives us a solution that provides:

  • Multiple versions: we can store complete snapshots of our system every hour, and deduplication means the incremental storage cost for each new backup is negligible.

  • Storing very large files: database backups can be very large but differ in small ways that are not block-aligned (imagine inserting one byte at the beginning of a file). Byte-level deduplication means we store only the changes between the versions, similar to doing a diff.

  • Storing many small files: backing up millions of files gives a much smaller number of deduplicated blocks that can be managed more easily.

  • Easily replicating between disks and over a WAN: the files in the storage pool are immutable; new blocks are stored as new files. This makes rsyncing them to other drives or machines very fast and efficient. It also means we can synchronize them to virtually any kind of machine or file storage.

  • Compression: compressing files gives significant size reductions, but using it often stops rsync or deduplication from working. zbackup compresses the blocks after deduplication, so rsyncing is still efficient. As mentioned previously, only new blocks need to be rsynced.

  • Fast backups: backups after the first one are done at close to the disk-read speed. More important, by running zbackup on each server, the majority of the CPU and I/O load is decentralized. This means there is minimal CPU or I/O required on the central server and only deduplicated blocks are transferred, providing scalability.

  • Highly redundant: by synchronizing to external drives and other servers, even corruption or destruction of the backups means we can recover our information.

Comparing Alternatives

There are many alternatives to using zbackup. I compare some of the options below:

  • tape: has a relatively high cost, and takes a long time to read and write as the entire backup is written. This is a good option for archival storage, but it is unsuitable for frequent snapshots because you can't write a 500GB tape every hour.

  • rsnapshot: does not handle small changes in large files in any reasonable way, as a new copy is kept for each new version. Taking snapshots of large numbers of files causes a huge I/O load on the central backup server when they are copied and when they are deleted. It is also very slow to synchronize the hard links to another device or machine.

  • Tarsnap: this is an excellent product and very reasonably priced. Slow restores and being dependent on a third party for storage make this a good fallback option but possibly unsuitable as your only method of backup.

  • Git: doesn't handle large files efficiently (or in some cases fails completely). It also doesn't easily handle anything with Git control files in it, so it makes backing up your Git repositories a real challenge. As Git is so poor at large files, tarring directories and using the tar file is not feasible.

  • ZFS/BTRFS: filesystem snapshots are very fast and work well for small files. Even the smallest change in a file requires the file to be re-copied (this is not strictly true for ZFS if deduplicating is enabled; however, this has a significant memory load and it works only if the file is unchanged for most of its blocks, like an Mbox file or database backing store).

  • Duplicity: this seems similar to zbackup and has many of the same benefits, except that it does not deduplicate between files with different names. Although it has been in beta for a long time, it has many features for supporting remote back ends, whereas zbackup is simply a deduplicating block store.

Summary of Approach

The key part of this approach is using zbackup in step 1. The backups produced by zbackup have remarkable properties compared with the other backup formats, as discussed previously, so the remaining steps can be tailored to the level of availability and durability you need. A rough sketch of steps 1 and 2 follows the list.

  1. Each virtual server uses zbackup to back up to a local deduplicated block store. This means every snapshot is available locally if needed.

  2. The zbackup store then is replicated to a central backup server where it can be recovered if needed.

  3. The zbackup stores on the central server are replicated out to other servers.

  4. The backups also are synchronized to external storage—for example, a USB drive. We rotate between drives so that one is always kept off-site, in case of disaster or backup corruption.

  5. Finally, we snapshot the filesystem where the zbackup stores are located.
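
The following is a minimal sketch of how steps 1 and 2 might run on each server. The store location, the exampledb database and the host name backuphost are placeholders for your own environment, and the script assumes the store was already created with zbackup init:

#!/bin/bash
# Sketch only: back up locally with zbackup (step 1), then replicate the
# immutable store to a central server with rsync (step 2).
set -e

STORE=/var/backups/zbackup                  # local zbackup store (placeholder)
DATEDIR=$(date "+%Y-%m/%d/%H:%M")
mkdir -p "$STORE/backups/$DATEDIR"

# Step 1: deduplicated local backups of files and a database dump.
tar -c /var/www | zbackup --silent backup "$STORE/backups/$DATEDIR/www.tar"
pg_dump exampledb | zbackup --silent backup "$STORE/backups/$DATEDIR/exampledb.sql"

# Step 2: replicate the whole store; only new bundle files are transferred.
rsync -a "$STORE/" backuphost:/srv/backups/$(hostname)/

Steps 3 to 5 are then a matter of repeating the same rsync pattern from the central server out to other servers and drives, and snapshotting the filesystem that holds the stores.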

Using zbackup

zbackup fits right into the UNIX philosophy. It does two seemingly simple things that make it behave almost like a file. The first is taking a stream of data passed to stdin and writing it to a block store. A handle to the data is stored in a small backup file, stored next to the block store. The second is taking that backup file and writing the original data to stdout.

During the process, zbackup will identify blocks of data that it has seen before, deduplicate them and then compress any new data before writing it out to disk. When deduplicating data, zbackup uses a sliding window that moves a byte at a time, so that if you insert a single byte into a file, it still can identify the repeated blocks. This is in contrast to block-level deduplication like that found in ZFS.
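
Jumping ahead slightly to the commands introduced below, you can see the effect with a small experiment (the paths here are examples only): back up a file, prepend a few bytes so that every block boundary shifts, and back it up again under a new handle:

# zbackup init --non-encrypted /tmp/demo/
# cat /tmp/database.sql | zbackup --silent backup /tmp/demo/backups/v1
# du -s /tmp/demo/
# (echo x; cat /tmp/database.sql) | zbackup --silent backup /tmp/demo/backups/v2
# du -s /tmp/demo/

Despite the shifted content, the second du should show only a small increase, because the repeated data is still found.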

To start using zbackup, you must install it from source. This is very easy to do; just follow the instructions on the http://zbackup.org Web site.
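
The details may change over time, but at the time of writing, a source build on a Debian-style system looked roughly like the following; the package names and repository URL are worth double-checking against the site:

# apt-get install cmake g++ zlib1g-dev libssl-dev liblzma-dev libprotobuf-dev protobuf-compiler
# git clone https://github.com/zbackup/zbackup.git
# cd zbackup && cmake . && make && make install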

Assuming you have installed zbackup, and that /usr/local/bin is in your path, start by initializing a block store (in these examples, I am running as root, but that is not a requirement):


# zbackup init --non-encrypted  /tmp/zbackup/

Hopefully you don't use /tmp for your real backups! You can list out the block store as below—the Web site has great information on what goes where. The main directory to keep in mind is backups; this is where your backup files go:


# ls /tmp/zbackup 
backups  bundles  index  info

Let's back up a database backup file—this takes a while the first time (Listing 1).

Listing 1. Backing Up One File

# ls -l /tmp/database.sql
-rw-r--r-- 1 root root 406623470 Sep 14 17:41 /tmp/database.sql
# cat /tmp/database.sql | zbackup backup /tmp/zbackup/backups/database.sql
Loading index...
Index loaded.
Using up to 8 thread(s) for compression

To check where that went, look at Listing 2. As you can see, the backup file is only 135 bytes. Most of the data is stored in /bundles, and it is less than one tenth the size of the original database.

Listing 2. Check the Backup

# ls -l /tmp/zbackup/backups/database.sql
-rw------- 1 root root 135 Sep 14 17:43 /tmp/zbackup/backups/database.sql
# du --max-depth=1 /tmp/zbackup/
8       /tmp/zbackup/backups
208     /tmp/zbackup/index
29440   /tmp/zbackup/bundles

Now, make a small change to the original database dump and then back it up again to simulate ongoing use (see Listing 3). This example illustrates an important point: zbackup will not change any file in the data store. You can rename the files in the /backups directory if you choose. You also can have subdirectories under /backups, as shown in Listing 4, where the backup finally works.

Listing 3. Backing Up a File Again

# cat /tmp/database.sql | zbackup --silent backup /tmp/zbackup/backups/database.sql
Won't overwrite existing file /tmp/zbackup/backups/database.sql

Listing 4. Backing Up a File, Part 2

# mkdir -p /tmp/zbackup/backups/1/2/3/
# cat /tmp/database.sql | zbackup --silent backup /tmp/zbackup/backups/1/2/3/database.sql

This should complete much more quickly, both because the file is cached and because most of the blocks already have been deduplicated:


# du --max-depth=0 /tmp/zbackup/ 
29768	/tmp/zbackup/

In this example, the changes I made to the file have only slightly increased the size of the backup.

Let's now restore the second backup. Simply pass the backup handle to zbackup restore, and the file is written to stdout:


# zbackup restore /tmp/zbackup/backups/1/2/3/database.sql > /tmp/database.sql.restored

Now you can check the file you restored to prove it is the same as the file you originally backed up (Listing 5).

Listing 5. Checking the Restored File

# ls -l /tmp/database.sql*
-rw-r--r-- 1 root root 406622180 Sep 14 17:47 /tmp/database.sql
-rw-r--r-- 1 root root 406622180 Sep 14 17:53 /tmp/database.sql.restored
# md5sum /tmp/database.sql*
179a33abbc3e8cd2058703b96dff8eb4  /tmp/database.sql
179a33abbc3e8cd2058703b96dff8eb4  /tmp/database.sql.restored

Of course, in most cases, you aren't backing up a single file. This is where the UNIX philosophy works well—because tar can read from stdin and write to stdout, you simply can chain zbackup to tar. Listing 6 shows an example of backing up a large directory structure in /tmp/files/ using tar piped to zbackup.

Listing 6. tar and Back Up a Directory

# tar -c /tmp/files | zbackup --silent backup /tmp/zbackup/backups/files.tar
# du --max-depth=0 /tmp/zbackup
97128   /tmp/zbackup

Now there are two backups of the database file and a tarred backup of /tmp/files in the one zbackup store. There is nothing stopping you from calling your backup file files.tar.gz or anything else; however, this is going to be very confusing later on. If you name your backup file based on the name of the file to which it restores, it makes it much easier to work out what each backup is.

Now you can restore this backup using the example in Listing 7. Most of the example is creating the directory to restore to and comparing the restored backup to the original.

Listing 7. Restoring from zbackup

# mkdir /tmp/files.restore
# cd /tmp/files.restore/
# zbackup --silent restore /tmp/zbackup/backups/files.tar | tar -x
# diff -rq /tmp/files.restore/tmp/files/ /tmp/files/

If you are backing up frequently, it makes sense to organize your backups in directories by date. The example in Listing 8 has a directory for each month, then a subdirectory for each day and, finally, a subdirectory for each time of day—for example, 2014-09/12/08:30/—and all the backups for that time go in this directory.

Listing 8. Organize Your Backups

# export DATEDIR=`date "+%Y-%m/%d/%H:%M"`
# mkdir -p /tmp/zbackup/backups/$DATEDIR
# tar -c /tmp/files | zbackup --silent backup /tmp/zbackup/backups/$DATEDIR/files.tar
# cat /tmp/database.sql | zbackup backup /tmp/zbackup/backups/$DATEDIR/database.sql

Run this on a daily or hourly basis, and you can restore any backup you have made, going back to the beginning of time. For the files I am backing up, the zbackup data for an entire year takes less space than a single uncompressed backup.
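
One simple way to schedule it is a cron entry that calls a wrapper script containing the commands from Listing 8; the script path and log file here are placeholders:

# /etc/cron.d/zbackup: run the zbackup wrapper script at half past every hour
30 * * * * root /usr/local/sbin/zbackup-hourly.sh >> /var/log/zbackup.log 2>&1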

The zbackup directory has the extremely nice property that the files in it never change once they have been written. This makes it very fast to rsync (since only new files in the backup need to be read) and very fast to copy to other media like USB disks. It also makes it an ideal candidate for things like filesystem snapshots using LVM or ZFS.

Once you have your backups in zbackup, you can ship the store to a central server, drop it onto USB or tape, or upload it to Amazon S3 or even Dropbox.
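
Because nothing in the store changes once written, a plain rsync is enough for both cases; backuphost and the destination paths below are placeholders:

# rsync -a /tmp/zbackup/ backuphost:/srv/backups/$(hostname)/
# rsync -a /tmp/zbackup/ /media/usb-backup/zbackup/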

Benchmarks/Results

All this is good in theory, but the critical question is "How does it perform?" To give you an idea, I have run some benchmarks on a server that has multiple similar versions of the same application—for example, training, development, UAT. There are roughly 5GB of databases and 800MB of Web site files. The server has eight cores and plenty of memory, although all buffers were flushed prior to each benchmark.

All Web Sites: this is a collection of 30,000 files taking roughly 800MB of space. Table 1 illustrates the results. zbackup delivers a backup that is roughly a quarter of the size of the gzipped tar file. Each new backup adds three files—by design, zbackup never modifies files but only adds them.

Table 1. Multiple Web Sites

Method       Space   Time   Files
tar          743M    25s    1
tar & gzip   382M    44s    1
zbackup      105M    38s    203
zbackup 2    4K      30s    206
zbackup 3    632K    30s    209

The first time zbackup runs and backs up the entire directory, it takes longer, as there is no deduplicated data in the pool. On the first run, all eight cores were fully used. On slower machines, throughput is less due to the high CPU usage.

The second time, zbackup was run over an identical file structure, and only 4K of additional storage was used. The backup also runs faster because most of the data already is present.

The third time, four files of exactly 100,000 random bytes each were added to the filesystem before backing up again.

Single Web Site: the compression performance of zbackup in the first test is in large part because there are multiple similar copies of the same Web site. This test backs up only one of the Web sites to provide another type of comparison.

The results are shown in Table 2. Here the compression results are not much better than gzip, which shows that the large gains in the first test came mostly from deduplication across the multiple similar Web sites.

Table 2. Single Web Site

Method       Space   Time   Files
tar          280M    8s     1
tar & gzip   74M     9s     1
zbackup      66M     17s    131

Database Files: this is a backup of a database dump file in uncompressed text format. The results are shown in Table 3.

Table 3. Database File

Method       Space   Time   Files
tar          377M    2s     1
tar & gzip   43M     10s    1
zbackup      29M     32s    192
zbackup 2    4M      3s     200
zbackup 3    164K    3s     210

The first run is zbackup backing up a testing database of 377M. The deduplication and compression give significant gains over tar and gzip, although it runs much slower.

The second zbackup was a training database that is similar to the testing database, but it has an additional 10MB of data, and some of the other data also is different. In this case, zbackup very effectively removes the duplicates, with very little extra storage cost.

The final zbackup run randomly removed clusters of rows from the backup file to simulate the changes that come from updates and deletes. This is the typical case of backing up a database over short periods of time, and it matches very closely with my observation of real-world performance.

Network Performance: by design, zbackup does not modify or delete files. This means the number of added files and the additional disk space is all you need to synchronize over the network. Existing files never need to be updated.

Rather than benchmarking this, I have reviewed the real logs for our server. Synchronizing 6GB of data with more than 30,000 files typically takes less than ten seconds. Compared with the previous method of rsyncing the directory tree and large files, which used to take between one and three minutes, this is an enormous improvement.

The central server has a slow disk and network; however, it is easily able to cope with the load from synchronizing the zbackup. I suspect even a Raspberry Pi would have enough performance to act as a synchronization target.

As they say, your mileage may vary. There are many factors that can alter the performance you get, such as:

  • Disk speed.

  • CPU performance (which is particularly important for the first backup).

  • Nature of the files—for example, binary database backups will compress less than text backups.

  • Existence of multiple copies of the same data.

Data Integrity and Security

Because it deduplicates the data, zbackup is particularly vulnerable to file corruption: a change to a single file could make the entire data store useless. It is worthwhile to check your media to ensure it is in good condition. On the plus side, you probably can copy an entire year's worth of backups of 200GB of data to another disk in less than an hour.

Having multiple versions of backups available in the same zbackup store is not the same as having multiple copies. Replicating your zbackup store to other disks or servers does not, by itself, solve the problem. For example, if someone were to modify some files in the backup store and that store was then blindly replicated to every machine or disk, you would have many exact copies of a worthless backup.

For that reason, we snapshot the filesystem to guard against this, rotate our media and regularly check the backups. As an alternative, you could rsync just the new files from the server being backed up and ignore deletions and file updates, as shown below.
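
A pull of that style from the central server might look like the following, with webserver standing in for the machine being backed up; --ignore-existing leaves files that are already present untouched, and omitting --delete means nothing is ever removed from the copy:

# rsync -a --ignore-existing webserver:/var/backups/zbackup/ /srv/backups/webserver/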

The design of zbackup means that retrieving a backup also checks it for consistency, so it is worthwhile to try restoring your backups on a regular basis.
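
A simple periodic check, then, is to restore each backup to /dev/null and look at the exit status; the handle below is just an example:

# zbackup --silent restore /tmp/zbackup/backups/files.tar > /dev/null && echo "files.tar verified"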

Another point to consider is whether there is a single company, credential or key that, if compromised, could cause the destruction of all your backups. Although it is useful to have multiple media and servers, if a single hacker can destroy everything, you are vulnerable in the same way the two companies mentioned in the introduction were. Physical media that is rotated off-site is a good way to avoid such a single point of failure, as is a separate server with a completely different set of credentials.

zbackup makes it relatively simple to encrypt the data stored in the backup. If you are storing your backups on insecure or third-party machines, you may want to use this facility. When managing backups for multiple servers, I prefer to encrypt the media where the backups are stored using LUKS. This includes the drives within the servers and the removable USB drives.
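
If you do want encryption at the zbackup level, the store is created with a password file instead of --non-encrypted, and the same option is passed on every backup and restore. The passphrase file location below is a placeholder, and it is worth checking zbackup --help for the exact usage in your version:

# echo 'a long random passphrase' > /root/.zbackup-pass
# chmod 600 /root/.zbackup-pass
# zbackup init --password-file /root/.zbackup-pass /tmp/zbackup-encrypted/
# cat /tmp/database.sql | zbackup --password-file /root/.zbackup-pass backup /tmp/zbackup-encrypted/backups/database.sql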

Other Considerations

It is particularly important that you don't compress or encrypt your files before passing them to zbackup; otherwise, the deduplication will be completely ineffective. For example, Postgres allows you to compress your backups as they are written. If that option were used, you would get no benefit from using zbackup.
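
With PostgreSQL, for example, pipe an uncompressed plain-format dump straight into zbackup rather than letting pg_dump compress it first; exampledb and the store path are placeholders:

# pg_dump --format=plain exampledb | zbackup --silent backup /tmp/zbackup/backups/exampledb.sql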

In the architecture here, I have suggested doing the zbackup on each server rather than centralizing it. This means that although duplicates within a server are merged, duplicates between servers are not. For some applications, that may not be good enough. In this case, you might consider running zbackup on the virtualization host to deduplicate the disk files.

zbackup and tar both work on streams. This means that restoring a single file requires restoring the entire backup and extracting only the file you require. For small backups, this may be fine, but if your directory structures are very large, it may be worthwhile to back up directories individually rather than in one go. For example, you might choose to back up Web sites individually, as in the sketch below.
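
A sketch of that approach, assuming sites live under /var/www and using the store and date layout from the earlier examples:

DATEDIR=$(date "+%Y-%m/%d/%H:%M")
mkdir -p "/tmp/zbackup/backups/$DATEDIR"
# One backup handle per site, so a single site can be restored on its own.
for site in /var/www/*/ ; do
    tar -c "$site" | zbackup --silent backup \
        "/tmp/zbackup/backups/$DATEDIR/$(basename "$site").tar"
done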

zbackup currently is limited by the speed at which the data can be read in and streamed to the deduplication process. A file must be read in full and then deduplicated even if it hasn't changed. This is roughly equivalent to rsync -c (that is, checksum the file content rather than just comparing the file metadata). To scale to really large data sizes, zbackup may need to incorporate some of the tar facilities within itself, so that if it can determine a file hasn't changed (by inode and metadata), it deduplicates the file without reading it.
