BRU—Backup & Restore Utility
In many ways, I am a “typical” user. Backing up is a pain. Necessary, but still a pain. I'm also used to getting burned by bad tapes and utilities that just don't seem to be very robust (such as tar, or a few other commercial items that will remain unnamed).
I saw the ad for BRU on page 39 of the January '95 Linux Journal. I had seen it in other magazines, and a former business associate had once highly recommended it. With this as a background, and having a spare $97, I decided it was worth a try. Besides, they offer a 60-day risk-free guarantee. I faxed them an order, not realizing how soon I was going to need BRU.
Ted Cook called me the next day (my fax went out late in the evening). He asked what my kernel version was, whether I was running Slackware and had pkgtool, whether I wanted the pkgtool version or the tar version, and what disk size I needed. I opted for the pkgtool version on 3-1/2" disks. BRU, along with a nifty mug (which you can keep even if you decide not to keep BRU), arrived two days later.
The package comes on a single 1.44MB floppy with a nicely done spiral-bound manual, plus an addendum sheet outlining the install process for Linux. Installation using pkgtool is quick and painless.
Once installed, you must edit /etc/brutab to define your backup devices to BRU. The file is well commented, and the process is outlined in detail in the manual. I did this, defining my Tandberg 3600 drive. There is also a file, /etc/bruxpat, that contains patterns of files to be excluded from backups, such as /tmp/* and /proc/*, as well as files that should not be compressed if you are using BRU's built-in compression, such as .Z or .gz files. The use of this file is optional.
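Going from the comments in my copy, the exclusion file looks roughly like the sketch below. The selector codes and patterns here are illustrative only; the real syntax is documented in the file's own comments and in the manual:

```shell
# /etc/bruxpat (sketch -- consult the file's own comments for the real syntax)
xs  /tmp/*      # exclude these trees from backups
xs  /proc/*
zs  *.Z         # don't waste time recompressing already-compressed files
zs  *.gz
```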
Here in /etc/brutab I found what I consider to be a flaw with the way BRU is shipped. There is an entry for OVERWRITE PROTECT, which is turned on, but it relies on the value of RECYCLEDAYS, which is set to zero, effectively disabling the protection. As I will relate, this turned into a painful “gotcha” for me. Having plenty of tapes and a fairly regular backup schedule, I set RECYCLEDAYS to 7. There are many other options that can be set in /etc/brutab, most of which can be left alone or omitted for default values.
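In brutab terms, the fix amounts to something like the following. The keyword spellings are as I recall them from the shipped comments, so treat this as a sketch rather than a verbatim file:

```shell
# /etc/brutab (sketch -- keyword spellings assumed; see the shipped comments)
OVERWRITEPROTECT    # refuse to overwrite a recently written BRU tape
RECYCLEDAYS=7       # ...where "recently" means within the last seven days
```

With RECYCLEDAYS left at zero, no tape ever counts as "recent", so the protect entry is on paper only.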
I suppose the best way to test a backup product is to back up a system and then wipe it clean. This is not what I intended to do, but it is effectively what I ended up doing. I ran a backup using BRU the day I received it. Three days later, I ran another backup, went to work, and came home to a failed hard drive. Ugh!

Thanks to a good friend, I was able to get a loaner drive the same evening. I booted with my Slackware disks (1.1.2—old, I know, but that's what I'm running...), partitioned and formatted the new drive, and installed only the required packages from the A disk set. I then installed BRU, edited /etc/brutab to define my tape drive, loaded up my tape, and started the restore—or so I thought. What actually happened is that my fingers got dyslexic on me, and instead of telling BRU to extract from the tape, I told it to back up to the tape... This is where the default setting of RECYCLEDAYS=0 got me. Had it been anything else, or had I remembered to change it back to 7, I would not have overwritten my latest backup tape. (This should no longer be an issue, since EST, Inc. has changed the installation script to update these variables automatically during installation, creating /etc/brutab according to the installer's preferences.)
After thoroughly cussing myself out, kicking the wall, and muttering into thin air for a while, I changed RECYCLEDAYS to 7, write-protected the first tape I had made three days prior, and did the restore. Once complete, I rebooted, and the system came up perfectly.
I then decided to test BRU's claims of reliability. Sitting back in the corner, I have a tape with BAD written all over it. It first failed during a server backup at work (a whole different story about commercial software that doesn't work), so I brought it home, where it worked for a while. Soon, though, the tape was giving me errors constantly: it would almost always appear to write properly, but would fail very shortly into any read operation with media I/O errors. I popped this tape into the drive, changed to /usr/bin, and did a backup. (BRU stores absolute pathnames only if you explicitly tell it to; otherwise, it stores everything relative to ./.)
BRU complained again during the “AUTOSCAN” pass.
I created a junk directory, changed into it, and did a restore.
BRU warned me about my junk media.
BRU restored every single file on the tape.
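For the curious, the whole test boiled down to a session like this. It's a sketch: I'm assuming bru's -x (extract) mode alongside the -c/-v/-f flags from my usual backup command, /dev/rmt0 as the drive, and that the current directory is named explicitly for the backup:

```shell
# back up /usr/bin with relative paths (everything is stored under ./)
cd /usr/bin
bru -cvf /dev/rmt0 .

# restore into a scratch directory -- relative paths land here,
# not back over the live /usr/bin
mkdir /tmp/junk
cd /tmp/junk
bru -xvf /dev/rmt0
```

The relative-path default is what makes this kind of test safe: the restored files go wherever you happen to be standing.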
I don't recommend using bad media for backups, but BRU did prove to me that it really does have the “GUTS” it talks about in the advertisements.
Since then, I've installed an Exabyte 8200 8mm tape drive, and do almost all of my backups there. With “out of the box” buffer tuning, I get about 240KB/s throughput writing to the tape. The AUTOSCAN feature is very nice, because it will warn you about media errors before you put your tape on the shelf thinking your data is secure. BRU also includes scripts for doing full and incremental (up to 9 levels) backups. There are no menus—everything is driven from the command line. Hey—I'm not running Windoze here... My backup regimen now consists of
cd /;bru -cvvvXf /dev/rmt1
Twenty minutes or so later, I come back and check, confident that AUTOSCAN will warn me of any problems encountered.
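When I want more than the scan's verdict, a table-of-contents pass lets me eyeball the archive itself. I'm assuming a -t listing mode here, by analogy with the create and extract modes; check bru's own help output before relying on it:

```shell
# list the archive's contents without restoring anything
bru -tvf /dev/rmt1
```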
BRU has many, many options, most of which I have not even begun to look at. I like it. It's reliable. It fills a definite need. If you'd like more information, call Ted Cook at Enhanced Software Technologies, Inc., (800) 998-8649 or (602) 820-0042. Tell him I sent you.
About system: 80486DX/33, 20MB RAM, 1.2GB SCSI Disk, Tandberg 3600 and external Exabyte 8200 tape drives, and Adaptec 1542B SCSI Host adapter. Linux: Slackware 1.1.2 (highly modified) with kernel 1.1.45
Jon Freivald (firstname.lastname@example.org) is a Small Computer System Specialist for the US Marine Corps, currently stationed in Garden City, New York. He manages a Wide Area Network running Banyan VINES covering the eight Northeastern states. He has been running Linux at home for over two years.