Backups to the Future: Eliminate Tape Backups with FreeNAS and Bacula
Address = localhost
You also could search for the client.example.com and storage.example.com entries to find some of the other entries that need to be changed. Once the passwords and Address fields have been set, open the /etc/bacula/bacula-sd.conf file in your editor, and comment out the following line in the FileStorage Device resource:
Archive Device = /tmp
Then, add the line below in its place to associate the locally mounted FreeNAS partition with the storage dæmon so you can back up to it:
Archive Device = /mnt/freenas
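For reference, after the edit, the complete FileStorage Device resource should look something like this (the surrounding directives shown here are the stock defaults and may differ slightly depending on the Bacula version your distribution ships):

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /mnt/freenas
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}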
The final step is to open the Services utility under System→Administration, and check the box to set bacula-dir, bacula-sd and bacula-fd to start on runlevel 5 (Figure 5). You now can use the syntax:
service bacula-dir|sd|fd start|stop|restart
to control the dæmons. On other distributions, you can start the dæmons directly from /usr/sbin and use chkconfig to set the runlevel.
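For example, on a Red Hat-style distribution, the equivalent command-line setup would look something like the following sketch (package and service names can vary):

chkconfig --level 5 bacula-dir on
chkconfig --level 5 bacula-sd on
chkconfig --level 5 bacula-fd on
service bacula-dir start
service bacula-sd start
service bacula-fd start

The chkconfig lines enable the three dæmons on runlevel 5, and the service lines start them immediately.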
Running a backup is quite simple, as you already have done most of the work by editing the bacula-dir.conf file. Start the Bacula console from the Applications→System Tools menu (Figure 6) in GNOME. You may need to edit the launcher, as I did, to point it to the correct /etc/bacula/gnome-console.conf file. Start the Tray Monitor utility from the System Tools menu as well. The Tray Monitor (Figure 7) is nice because it gives you a quick glance at the status of the dæmons and any running jobs, which is helpful when you are multitasking or have jobs that run nightly and you want to check their status the next morning. Return to the console, and click the Run button to bring up the backup job dialog window. Under Job, select WeeklyHomeBackups (Figure 8). This pre-fills the field selections with the items specified in your .conf file. You could change any of these options at this point, but they must already exist in the .conf file or they will not appear in the fields. In other words, you can't create a job from the drop-downs without first populating the Job section of the .conf file.
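If you prefer to skip the GUI, the same job can be started from the console prompt; a minimal sketch, assuming the job name from the .conf file:

*run job=WeeklyHomeBackups yes
*status dir

The yes keyword skips the confirmation prompt, and status dir reports on the Director and any running or scheduled jobs.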
Up to this point, there are no volumes, which, as previously mentioned, need to exist before you can run a backup. Typically, you would have to use the label command from the console's command line to create a volume in a pool manually, but because of our settings, the system will create volumes automatically, name them itself and recycle them when the volume retention period triggers. I like this better than creating the volumes manually, as you are less likely to encounter naming errors. Click OK to run the job, and view the results in the console.
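The directives that drive this behavior live in the Pool resource in bacula-dir.conf. A sketch along these lines (the pool name, retention period and volume cap here are illustrative):

Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 7 days
  Label Format = "Vol-"
  Maximum Volumes = 10
}

Label Format is what lets the Director name new volumes automatically (Vol-0001, Vol-0002 and so on), while Recycle and AutoPrune together allow volumes whose retention has expired to be reused.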
If you were to change the Volume Retention setting on the same pool, restart the dæmons and run the job again, you would see the system auto-recycle a volume in the pool for the next job. Otherwise, it would prompt you to create a new volume, because no existing volume could be recycled under the retention settings. You can run these jobs manually as often as you want, but they also will run according to the schedule defined in the bacula-dir.conf file.
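The schedule itself comes from a Schedule resource in bacula-dir.conf. The stock WeeklyCycle, for instance, looks something like this:

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}

Pointing a Job's Schedule directive at a resource like this is all it takes to have the job fire automatically.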
Restoring a file in Bacula also is remarkably simple. You can use either the Restore button on the console toolbar or the restore command. Both are easy to use, but the restore command provides more options. To keep it simple, let's use the Restore button. When the dialog opens, select a job, client, pool and so on from which to restore (Figure 9), then click Select Files to mark the files and folders you want to restore (Figure 10). Before the restore job runs, you will be prompted to confirm your options, at which point you can type yes, mod or no. Typing mod gives you finer control over the job, including the option to restore to a different path from the original one.
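For the curious, here is a rough sketch of what the same operation looks like at the console prompt (the client name and paths are examples, and the numbered menu will vary with your job history):

*restore client=client.example.com
# pick an option from the menu, such as the most recent backup for the client
$ cd /home
$ mark user
$ done
# at the confirmation prompt, type mod to change options,
# choose Where and enter an alternate path such as /tmp/bacula-restores, then:
yes

Within the file-selection tree, cd, ls, mark, unmark and done work much as you would expect, and the Where parameter under mod is what redirects the restore away from the original path.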