Easy Database Backups with Zmanda Recovery Manager for MySQL
Recently, I had a chance to test the community edition of Zmanda Recovery Manager for MySQL. I was partly testing to make sure it worked with MariaDB, Monty Program's drop-in replacement for MySQL, but I also was testing to see whether it would work well for our own database backups.
Monty Program has arguably the most in-depth knowledge of the MySQL codebase on the planet. But apart from some large servers we use for performance and other testing, our actual database usage and needs are similar to those of many other small- to medium-size companies. For our Web sites, we need only a couple of small database servers. The datasets are not large: a couple hundred megabytes each. But we still don't want to lose any of that data, so backups are a must.
I've long used a homegrown set of shell scripts to manage backing up our databases. They're simple and work well enough to get the job done. They lack some features that I've never gotten around to implementing, such as automated retention periods and easy restores. The setup process also is more involved than I would prefer. They get the job done, but I've always wanted something a little better, and I've never had the time to do it myself.
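For context, the features those scripts lack can be sketched in just a few lines of shell. This is a minimal illustration, not my actual scripts: the paths, the `mysqldump` flags and the 14-day retention window are all assumptions.

```shell
#!/bin/sh
# Sketch of a nightly MySQL backup with automated retention.
# Paths, dump flags and the 14-day window are assumptions.

backup_mysql() {
    # Dump all databases to a date-stamped, compressed file in $1.
    mkdir -p "$1"
    mysqldump --all-databases --single-transaction \
        | gzip > "$1/all-$(date +%Y%m%d).sql.gz"
}

prune_backups() {
    # Automated retention: delete dumps in $1 older than $2 days.
    find "$1" -name 'all-*.sql.gz' -mtime +"$2" -delete
}

# Typical nightly cron job:
# backup_mysql /var/backups/mysql && prune_backups /var/backups/mysql 14
```

Even something this small covers the date-stamping and pruning; what it still doesn't give you is easy restores, which is exactly where a dedicated tool earns its keep.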
This is where Zmanda Recovery Manager for MySQL enters the picture. ZRM Enterprise edition was reviewed by Alolita Sharma in the September 2008 issue of Linux Journal, but I'm never very interested in Enterprise editions. They always have proprietary bits, and I've never trusted GUI tools as much as their command-line counterparts. Luckily, where there is an “enterprise” version there almost always is a “community” version lurking somewhere in the shadows.
Like many other community editions, Zmanda Recovery Manager for MySQL, Community Edition (let's just call it ZRM) lacks the “flashy bits”. Things like the graphical “console” application, Windows compatibility, 24x7 support and other high-profile features of its Enterprise sibling are missing in the Community version. But the essentials are there, and it has one big feature I like: it is fully open source (GPL v2). The key question, however, is: does it do what I need it to do?
To find out, I set up a small test environment (I didn't want to test on live, production databases) and gave it a spin. See the Setting Up a Test Environment sidebar for details on what I did prior to installing and testing ZRM.
Setting Up a Test Environment
For testing and evaluating ZRM, I set up three virtual servers running Ubuntu 10.04 LTS, installed MariaDB on them, and then downloaded the example “employees” test database from launchpad.net/test-db.
Installing MariaDB is easy on Debian, Ubuntu and CentOS because of some MariaDB repositories maintained by OurDelta.org. The site has instructions on how to configure your system to use the repositories. For Ubuntu 10.04, I did the following:
1. Added the following lines to /etc/apt/sources.list.d/mariadb.list:
# MariaDB OurDelta repository for Ubuntu 10.04 "Lucid Lynx"
deb http://mirror.ourdelta.org/deb lucid mariadb-ourdelta
deb-src http://mirror.ourdelta.org/deb lucid mariadb-ourdelta
2. Added the repository key to apt:
apt-key adv --recv-keys \
    --keyserver keyserver.ubuntu.com 3EA4DFD8A29A9ED6
3. Updated apt and installed mariadb-server:
apt-get update
apt-get install mariadb-server
Installing mariadb-server looks and acts just like installing mysql-server.
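A quick sanity check I find useful at this point (my own habit, not part of the install instructions) is to confirm that the server apt just installed really is a MariaDB build. The version string the server reports should contain "MariaDB":

```shell
# Ask the server for its version string; for the OurDelta packages it
# should contain "MariaDB" (something along the lines of "5.1.x-MariaDB").
# -N suppresses the column-name header so only the value is printed.
mariadb_flavor() {
    mysql -u root -p -N -e 'SELECT VERSION();'
}
```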
With the database server installed, I loaded the test database. To load the employees test database into MariaDB, I first downloaded and untarred it and then used the mysql command-line program to load it into MariaDB like so:
tar -jxvf employees_db-full-1.0.6.tar.bz2
cd employees_db/
mysql -u root -p -t < employees.sql
The employees test database uses a couple hundred megabytes of disk space. This is in line with the size of our “real” databases. But more important than the comparative size, the employees test database comes with a handy verification script that lets me test that the data is correct. Verifying the data is as simple as this:
mysql -u root -p -t < test_employees_sha.sql
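Because the verification script reports a per-table status, it also can be wrapped in shell so a restore test fails loudly instead of requiring a human to read the output. The "not ok" failure marker below is my assumption about the script's output format, so check it against your own run first:

```shell
# Run the bundled checksum verification and return non-zero if any
# table's summary line reports a mismatch ("not ok" is assumed to be
# the failure marker in the script's output).
verify_employees() {
    if mysql -u root -p -t < test_employees_sha.sql | grep -iq 'not ok'; then
        echo 'employees verification FAILED' >&2
        return 1
    fi
    echo 'employees verification passed'
}
```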
With the test database servers set up, I then created a fourth virtual machine with a base Ubuntu Server install on it to act as my backup server. Now I was ready to test using ZRM for backup and recovery with the ability to verify that the recovery was successful.