An Automated, Reliable Backup Solution
After setting up SSH key authentication, I created a GnuPG key that Duplicity would use to sign and encrypt the backups. I created the key as my normal user on the client machine; because the key belongs to a normal user account, the backups are limited to files that user can read, which rules out backing up the entire filesystem. If I decided at some point that I wanted to back up the entire filesystem, I simply would create a GnuPG key as the root user on the client machine instead. To generate the GnuPG key, I used the following command:
$ gpg --gen-key
Once both the GnuPG and SSH keys were created, the first thing I did was make a CD containing copies of both my SSH and GnuPG keys. Then I installed and set up Keychain. Keychain is an application that manages long-lived instances of ssh-agent and gpg-agent, eliminating the need to enter a passphrase for every command that requires the GnuPG or SSH keys. On a Debian client machine, I first had to install the keychain and ssh-askpass packages. Then I edited the /etc/X11/Xsession.options file and commented out the use-ssh-agent line so that ssh-agent would not be started every time I logged in to an X session. Finally, I added the following lines to my .bashrc file to start Keychain properly:
/usr/bin/keychain ~/.ssh/id_dsa 2> /dev/null
source ~/.keychain/`hostname`-sh
After that, I added an xterm instantiation to my gnome-session; the xterm in turn starts an instance of bash, which reads the .bashrc file and runs Keychain. When Keychain executes, it checks whether the keys are already cached; if they are not, it prompts me for my key passphrases once each time I start my computer and log in.
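To confirm that the SSH key actually is cached by the long-lived agent, I can check that Keychain wrote its environment files and list the keys the agent holds. This is only a quick sanity check, assuming the Keychain environment file already has been sourced by .bashrc:

$ ls ~/.keychain/
$ ssh-add -l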
Once Keychain was installed and configured, I was able to make unattended backups of directories simply by configuring cron to execute Duplicity. I backed up my home directory with the following command:
$ duplicity --encrypt-key AA43E426 \
  --sign-key AA43E426 /home/username \
  scp://user@backup_serv/backup/home
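To make this truly unattended, the same command goes into a crontab. The entry below is only a sketch: the 2am schedule and log filename are my own assumptions, and the Keychain environment file is sourced explicitly because cron jobs do not inherit the agent variables from my login session (depending on the Keychain version, the gpg-agent variables may live in a separate ~/.keychain/<hostname>-sh-gpg file that needs sourcing as well):

0 2 * * * . $HOME/.keychain/$(hostname)-sh; duplicity --encrypt-key AA43E426 --sign-key AA43E426 /home/username scp://user@backup_serv/backup/home >> $HOME/duplicity.log 2>&1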
After backing up my home directory, I verified the backup with the following command:
$ duplicity --verify --encrypt-key AA43E426 \
  --sign-key AA43E426 \
  scp://user@backup_serv/backup/home \
  /home/username
Suppose that I accidentally removed my home directory on my client machine. To recover it from the backup server, I would use the following command:
$ duplicity --encrypt-key AA43E426 \
  --sign-key AA43E426 \
  scp://user@backup_serv/backup/home \
  /home/username
However, my GnuPG and SSH keys are normally stored in my home directory, and without them I cannot recover my backups. Hence, I would first recover my GnuPG and SSH keys from the CD on which I previously saved them.
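Getting the keys back onto a freshly installed machine is straightforward. Assuming the CD holds ASCII-armored exports of the GnuPG key pair (made with gpg --export and gpg --export-secret-keys) alongside a copy of the SSH key files, the restore might look like the following; the mount point and filenames are placeholders:

$ gpg --import /media/cdrom/public-key.asc
$ gpg --import /media/cdrom/secret-key.asc
$ cp /media/cdrom/id_dsa /media/cdrom/id_dsa.pub ~/.ssh/
$ chmod 600 ~/.ssh/id_dsa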
This solution also provides the capability of removing backups on the backup server that are older than a specified date and time. Given this capability, I added the following command to my crontab to remove any backups more than two months old:
$ duplicity --remove-older-than 2M \
  --encrypt-key AA43E426 --sign-key AA43E426 \
  scp://user@backup_serv/backup/home \
  /home/username
This command conserves disk space, but it limits how far back I can recover data.
This solution has worked very well for me. It provides the key functionality that I need and meets all of my requirements. It is not perfect, however: Duplicity currently does not support hard links; it treats them as individual files. Hence, restoring a backup that contains hard links produces separate files rather than one file with multiple hard links pointing to it.
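One way to see the effect is to compare inode numbers: before the backup, the linked names share a single inode, while after a restore each name comes back as its own independent file. The filenames below are purely illustrative:

$ ln notes.txt notes-link.txt
$ ls -i notes.txt notes-link.txt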
Despite Duplicity's lack of support for hard links, it is still my backup solution of choice. Development of Duplicity seems to have picked up recently, and perhaps this phase of development will add hard-link support; perhaps I will find the time to add it myself. Either way, this setup provides an unattended, encrypted, redundant network backup solution that takes very little money or effort to set up.
Andrew J. De Ponte is a security professional and avid software developer. He has worked with a variety of UNIX-based distributions since 1997 and believes the key to success in general is the balance of design and productivity. He awaits comments and questions at firstname.lastname@example.org.