Cooking with Linux - Mirror, Mirror, of It All
The -r indicates a recursive copy, and the -p tells scp to preserve modification times, ownership and permissions from the original files and directories. If you are transferring large amounts of data, you might consider using the -C option, which compresses data on the fly. It can make a substantial difference in throughput.
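Putting those flags together, a session that pulls down a remote home directory might look like the following sketch (the hostname and paths are placeholders, not from any real system):

```shell
scp -rpC marcel@remote.hostname:/home/marcel /backups/marcel
```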
Possibly the biggest problem with all these methods of mirroring data is that they can take a great deal of time. wget will download new files from an FTP server, but it has no option to keep a directory entirely in sync by deleting files that have disappeared on the server. Secure copy is nice, but it has no mechanism for transferring only the files that have changed, so every run copies everything. Keeping the data in sync without transferring every single file and directory requires a program with a bit more finesse.
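To see what wget can and cannot do here, its mirror mode (-m, which turns on recursion and timestamping so unchanged files are skipped on later runs) is as close as it gets; note that even this never deletes local files the server has dropped. The hostname and path below are placeholders:

```shell
wget -m ftp://marcel@remote.hostname/home/marcel/
```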
The best program I know for this is probably Andrew Tridgell's rsync. Linux Journal's own Mick Bauer did a superb job of covering this package in the March and April 2003 issues of this fine magazine, so I won't go over it again other than to say you might want to look up his two-parter on the subject.
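For the curious, a typical rsync invocation for this kind of mirroring is sketched below; -a preserves permissions and times, -z compresses in transit, and --delete removes local files that no longer exist on the remote side. Hostname and paths are placeholders:

```shell
rsync -avz --delete marcel@remote.hostname:/home/marcel/ /mirdir/
```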
In many cases, that leaves us with our old friend FTP (well, sort of). On one side (the machine you want to mirror), you run your FTP server, whether that's ProFTPD or wu-ftpd. On the other side, you use Uwe Ohse's ftpcopy program. ftpcopy is a fast, easy-to-set-up and easy-to-use program that does a nice job of copying entire directory hierarchies, maintaining permissions and modification dates and times as it goes. Furthermore, it keeps track of files that already have been downloaded, so the next time you run ftpcopy, it transfers only those files that have changed, making your backup even faster.
Some distributions come with ftpcopy, but for the latest version of ftpcopy, go to www.ohse.de/uwe/ftpcopy/ftpcopy.html to pick up the download. Building the package is easy and takes only a few steps:
tar -xzvf ftpcopy-0.6.2.tar.gz
cd web/ftpcopy-0.6.2
make
In the directory called command, you'll find three binaries: ftpcopy, ftpcp and ftpls. You can run them from there or copy the three files to /usr/local/bin or somewhere else in your $PATH.
Here's how it works. Let's say I wanted to mirror or back up my home directory on a remote system. A basic ftpcopy command looks something like this:
ftpcopy -u marcel -p secr3t! \
  remote.hostname /home/marcel /mirdir/
The -u and -p options are obviously for my user name and (fake) password on the remote system. What follows is the path to the directory you want to copy and then the local directory where this directory structure will be re-created. As the download progresses, you will see something like this:
/mirdir/scripts/backup.log: download successful
/mirdir/scripts/checkhosts.pl: download successful
/mirdir/scripts/ftplogin.msg: download successful
/mirdir/scripts/gettime.pl: download successful
If you want a little more information on your download, add the --bps option. The results then report the rate of data transfer in bytes per second.
You should consider running ftpcopy with the --help option at least once to get familiar with its options. For instance, -s deals with symbolic links, and -l lets you increase the level of logging. If you want mirroring to run by means of a cron job, you might want to set logging to 0. Another useful option is -n. By default, if a file is deleted on the remote side, it also will be deleted locally when you run ftpcopy; if you truly are trying to keep systems in sync, that is exactly what you want. To override this behavior, add -n and no deletes will occur.
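Putting the cron suggestion into practice, a crontab entry along these lines would run the mirror nightly at 2:15am (the schedule, paths and credentials are placeholders, and I'm assuming -l 0 sets the log level as described above):

```shell
# m  h  dom mon dow  command
15   2  *   *   *    /usr/local/bin/ftpcopy -l 0 -u marcel -p secr3t! remote.hostname /home/marcel /mirdir/
```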
Well, mes amis, the hour has arrived, and we must all go to our respective homes. Still, it is early enough for a final glass of wine, non? François, mon ami, if you will do the honors—in fact, make it two glasses, one to mirror the other, non? Until next time, mes amis, let us all drink to one another's health. À votre santé! Bon appétit!
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
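That find-plus-grep combination can be sketched in a few lines; the directory names and the string "ERROR" below are purely illustrative, and a throwaway sandbox is built first so the pipeline can be run end to end:

```shell
# Build a small sandbox to stand in for /home
# (directory names and contents are illustrative).
workdir=$(mktemp -d)
mkdir -p "$workdir/alice" "$workdir/bob"
echo "ERROR: disk full" > "$workdir/alice/app.log"
echo "all quiet"        > "$workdir/bob/app.log"
echo "ERROR: no route"  > "$workdir/bob/notes.txt"   # not a .log file

# find selects the files, grep searches them: list every .log file
# under $workdir that contains the word ERROR. The "{} +" form
# batches the filenames into as few grep invocations as possible.
matches=$(find "$workdir" -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"
```

Only alice/app.log is printed: bob/app.log matches the name filter but not the pattern, and notes.txt never reaches grep at all.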
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SourceClear Open
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Stunnel Security for Oracle
- Doing for User Space What We Did for Kernel Space
- Google's SwiftShader Released
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and hardware that multithreads like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide