Easy Database Backups with Zmanda Recovery Manager for MySQL
Recently, I had a chance to test the community edition of Zmanda Recovery Manager for MySQL. I was partly testing to make sure it worked with MariaDB, Monty Program's drop-in replacement for MySQL, but I also was testing to see whether it would work well for our own database backups.
Monty Program has arguably the most in-depth knowledge of the MySQL codebase on the planet. But apart from some large servers we use for performance and other testing, our actual database usage and needs are similar to many other small- to medium-size companies. For our Web sites, we need only a couple small database servers. The datasets for each database are not large, a couple hundred megabytes each. But, we still don't want to lose any of it, so backups are a must.
I've long used a homegrown set of shell scripts to manage backing up our databases. They're simple and work well enough, but they lack some features I've never gotten around to implementing, such as automated retention periods and easy restores, and the setup process is more involved than I would prefer. They get the job done, but I've always wanted something a little better, and I've never had the time to do it myself.
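For a sense of scale, the whole of the dump-and-prune approach such scripts take fits in a few lines. The following is only a sketch, not my actual script: the backup directory, database name and 14-day retention are placeholders, and it assumes credentials are stored in ~/.my.cnf so no password appears on the command line:

```shell
#!/bin/sh
# Nightly dump-and-prune sketch. BACKUP_DIR, RETENTION_DAYS and the
# database name below are illustrative placeholders.
BACKUP_DIR=${BACKUP_DIR:-$HOME/mysql-backups}
RETENTION_DAYS=${RETENTION_DAYS:-14}
STAMP=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Dump each database in a consistent snapshot (credentials assumed
# to be in ~/.my.cnf).
for db in employees; do
    if command -v mysqldump >/dev/null 2>&1; then
        mysqldump --single-transaction "$db" | gzip \
            > "$BACKUP_DIR/$db-$STAMP.sql.gz"
    fi
done

# Retention: remove compressed dumps older than RETENTION_DAYS days.
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
```

Scheduling this from cron covers the basics, but restores remain a manual untar-and-reload affair, which is exactly the gap a tool like ZRM promises to fill.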
This is where Zmanda Recovery Manager for MySQL enters the picture. ZRM Enterprise edition was reviewed by Alolita Sharma in the September 2008 issue of Linux Journal, but I'm never very interested in Enterprise editions. They always have proprietary bits, and I've never trusted GUI tools as much as their command-line counterparts. Luckily, where there is an “enterprise” version there almost always is a “community” version lurking somewhere in the shadows.
Like many other community editions, Zmanda Recovery Manager for MySQL, Community Edition (let's just call it ZRM) lacks the “flashy bits”. Things like the graphical “console” application, Windows compatibility, 24x7 support and other high-profile features of its Enterprise sibling are missing from the Community version. But the essentials are there, and it has one big feature I like: it is fully open source (GPL v2). The key question, however, is: does it do what I need it to do?
To find out, I set up a small test environment (I didn't want to test on live, production databases) and gave it a spin. See the Setting Up a Test Environment sidebar for details on what I did prior to installing and testing ZRM.
Setting Up a Test Environment
For testing and evaluating ZRM, I set up three virtual servers running Ubuntu 10.04 LTS, installed MariaDB on them, and then downloaded the example “employees” test database from launchpad.net/test-db.
Installing MariaDB is easy on Debian, Ubuntu and CentOS because of some MariaDB repositories maintained by OurDelta.org. The site has instructions on how to configure your system to use the repositories. For Ubuntu 10.04, I did the following:
1. Added the following lines to /etc/apt/sources.list.d/mariadb.list:
# MariaDB OurDelta repository for Ubuntu 10.04 "Lucid Lynx"
deb http://mirror.ourdelta.org/deb lucid mariadb-ourdelta
deb-src http://mirror.ourdelta.org/deb lucid mariadb-ourdelta
2. Added the repository key to apt:
apt-key adv --recv-keys \
    --keyserver keyserver.ubuntu.com 3EA4DFD8A29A9ED6
3. Updated apt and installed mariadb-server:
apt-get update
apt-get install mariadb-server
Installing mariadb-server looks and acts just like installing mysql-server.
With the database server installed, I loaded the test database. To load the employees test database into MariaDB, I first downloaded and untarred it and then used the mysql command-line program to load it into MariaDB like so:
tar -jxvf employees_db-full-1.0.6.tar.bz2
cd employees_db/
mysql -u root -p -t < employees.sql
The employees test database uses a couple hundred megabytes of disk space. This is in line with the size of our “real” databases. But more important than the comparative size, the employees test database comes with a handy verification script that lets me test that the data is correct. Verifying the data is as simple as this:
mysql -u root -p -t < test_employees_sha.sql
With the test database servers set up, I then created a fourth virtual machine with a base Ubuntu Server install on it to act as my backup server. Now I was ready to test using ZRM for backup and recovery with the ability to verify that the recovery was successful.
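To anticipate the next step, a first community-edition ZRM run looks roughly like the session below. This is a sketch rather than a verified transcript: the backup-set name dailyrun is my own placeholder, and the exact command names and options should be checked against the ZRM documentation for your version:

```
# 1. Create a "backup set": a directory under /etc/mysql-zrm whose
#    mysql-zrm.conf holds per-set options (databases, user, host, etc.).
mkdir -p /etc/mysql-zrm/dailyrun
cp /etc/mysql-zrm/mysql-zrm.conf /etc/mysql-zrm/dailyrun/

# 2. Run a backup of that set...
mysql-zrm --action backup --backup-set dailyrun

# 3. ...and review the result.
mysql-zrm-reporter --show backup-status --where backup-set=dailyrun
```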
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
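The find-plus-grep tool just described is a one-liner. The sketch below builds a small demo tree so it is self-contained; in practice you would point it at /home (or /var/log), and the search string "error" is just an example:

```shell
# Chain find and grep: list every .log file under a tree that
# contains a given entry. The demo directory and the string "error"
# stand in for /home and whatever you are actually hunting for.
demo=$(mktemp -d)
printf 'INFO start\nerror: disk full\n' > "$demo/app.log"
printf 'INFO start\nINFO stop\n'        > "$demo/clean.log"

# grep -l prints only the names of matching files.
find "$demo" -name '*.log' -exec grep -l 'error' {} +
```

The `{} +` form hands find's results to grep in batches, so grep is invoked once rather than once per file.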
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
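As a baseline, the kind of job cron handles comfortably is a fixed recurring schedule, expressed as a single crontab line; anything needing dependencies, retries or load-awareness is where the "beyond cron" discussion begins. The script path below is a placeholder:

```
# min  hour  dom  mon  dow   command
  30   1     *    *    *     /usr/local/bin/backup-databases.sh
```

This runs the job at 1:30 am every day; cron itself offers no notion of "run B only after A succeeds".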
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.