On-line Encrypted Backups for Your Laptop
tar xzvf rlog-1.3.7.tgz
cd rlog-1.3.7
./configure && make
make install
cd ..
tar xzvf encfs-1.3.2-1.tgz
cd encfs-1.3.2
./configure && make
make install
The first time you attempt to mount a raw filesystem to an encrypted filesystem, EncFS will ask you what level of cryptography you desire and what passphrase to use. The same command is used to create an encrypted filesystem and to mount one. Subsequent mounts of the raw filesystem with EncFS will prompt you only for the passphrase. Initial mounting and remounting of EncFS on a rawfs (backed at the time by sshfs) is shown here:
$ encfs ~/rawfs ~/backupfs
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else... will select standard mode.
?>
Standard configuration selected.
Configuration finished. The filesystem ... has the
following properties:
Filesystem cipher: "ssl/blowfish", version 2:1:1
Filename encoding: "nameio/block", version 3:0:1
Key Size: 160 bits
Block Size: 512 bytes
Each file contains 8 byte header with unique IV data
Filenames encoded using IV chaining mode.
Now you will need to enter a password ...
You will need to remember this password, ... no
recovery mechanism. However, the password can be
changed later using encfsctl.
New Encfs Password:
Verify Encfs Password:
$ date > backupfs/datetest.txt
$ cat backupfs/datetest.txt
Fri Aug 24 20:44:33 EDT 2007
$ ls -l rawfs
total 4
-rw-rw---- 1 ben 505 37 2007-08-24 06:27 K9dmA...
$ fusermount -u backupfs
$ encfs ~/rawfs ~/backupfs
EncFS Password:
$ ls -l ~/backupfs
-rw-rw---- 1 ben 505 29 2007-08-24 06:27 datetest.txt
We now have a ~/backupfs filesystem that encrypts anything written to it and stores it on an on-line storage system somewhere. A great tool for keeping your on-line backup up to date is rsync(1).
The rsync manual page states: “The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection.”
In our case, both the data to be backed up and the place to which we are backing up appear through the Linux kernel. Because every write to ~/backupfs travels over the Internet, we want to limit the amount of data written to it.
Some differences between a normal Linux kernel filesystem like ext3 and our layered setup might have to be worked around with command-line options to rsync. Listing 3 shows an rsync on an EncFS, which is using sshfs to provide the on-line storage. The first time rsync is run, the whole file is uploaded to the on-line storage. The second time, only some metadata is sent and received.
The -a option to rsync is similar to the -a option to the cp command; it attempts to preserve everything in the source filesystem at the destination. The --no-g command-line option tells rsync not to try to sync the destination file's group to the source file's group. In this case, the sshfs does not allow me to change the group of the destination file, so rsync would generate a warning when it failed to set the remote file's group. The --delete-after option cleans up any files that exist only in the on-line storage filesystem. In this case, I also use --include to sync only the plain-text files. This can be quite handy for keeping backups of only OpenOffice.org documents in a larger filesystem.
Listing 3. Using rsync to Back Up Data to an Encrypted On-line Filesystem
$ rsync -av --delete-after \
    --include="*.txt" --no-g \
    small/ ~/backupfs
...
boysw10.txt

sent 49056 bytes  received 48 bytes
total size is 48923
$ rsync -av ...
sent 83 bytes  received 26 bytes
total size is 48923
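One caution about the filter rules: in modern rsync, an --include pattern only restricts the transfer when it is paired with a matching --exclude; on its own, --include does not prevent other files from being copied. A minimal sketch (the /tmp paths are illustrative, not from the article) that limits a copy to .txt files while still recursing into subdirectories might look like this:

```shell
# Hypothetical source tree with mixed file types
mkdir -p /tmp/rsdemo/src/sub /tmp/rsdemo/dst
echo notes > /tmp/rsdemo/src/keep.txt
echo blob  > /tmp/rsdemo/src/skip.bin
echo more  > /tmp/rsdemo/src/sub/deep.txt

# Include directories (so recursion continues) and .txt files,
# then exclude everything else.
rsync -av --include='*/' --include='*.txt' --exclude='*' \
    /tmp/rsdemo/src/ /tmp/rsdemo/dst/
```

After this run, only keep.txt and sub/deep.txt exist under the destination; skip.bin is never transferred.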
Another rsync option that can be invaluable is --modify-window=n, where the parameter n is the number of seconds by which the timestamps of the local and remote files can differ and still be considered the same. When using a filesystem backed by on-line storage, the modification time might range from not being perfectly accurate to being a few days off. Setting --modify-window correctly can hide these slight timestamp drifts or large fixed timestamp offsets and allow rsync to continue to work efficiently.
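As a sketch of the effect (two local directories stand in for the remote side, and the paths are illustrative), skew the destination's timestamp by one second; with --modify-window=2, rsync treats the timestamps as equal and transfers nothing:

```shell
# Hypothetical source and destination directories
mkdir -p /tmp/mwsrc /tmp/mwdst
echo data > /tmp/mwsrc/file.txt
cp -p /tmp/mwsrc/file.txt /tmp/mwdst/file.txt

# Skew the destination timestamp by one second (GNU touch -d)
touch -d '1 second ago' /tmp/mwdst/file.txt

# -i itemizes changes; with a 2-second window the skew is ignored,
# so file.txt is not listed for transfer.
rsync -ai --modify-window=2 /tmp/mwsrc/ /tmp/mwdst/
```

Run the same command without --modify-window and rsync recopies the file, because its quick check compares sizes and exact modification times.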
Running EncFS on top of OmniFS requires some special parameters when first mounting the EncFS. The main issue I found with the default settings for EncFS was that file contents, when read back, would sometimes have trailing garbage. When using OmniFS and first creating the EncFS, choose expert mode, cipher=AES, keysize=256, blocksize=4096, filename encoding=Stream, filename IV chaining=yes, per-file IV=no and block authentication code headers=no. The main issues seem to stem from the per-file IV settings and an interaction with the round-trip latency of OmniFS. Listing 4 shows some combinations of expert mode settings to EncFS when using OmniFS as the base filesystem and the resulting filesystem interaction.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
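The find-plus-grep combination described above can be sketched in one line. The directory and search pattern here are illustrative stand-ins for /home and whatever entry you are hunting for:

```shell
# Hypothetical log tree standing in for /home
mkdir -p /tmp/logdemo/user1 /tmp/logdemo/user2
echo "ERROR: disk full" > /tmp/logdemo/user1/app.log
echo "all quiet"        > /tmp/logdemo/user2/app.log

# -name narrows the search to .log files; grep -l prints only the
# names of files that contain the pattern.
find /tmp/logdemo -name '*.log' -exec grep -l 'ERROR' {} +
```

The `-exec ... {} +` form batches many filenames into one grep invocation, which is faster than spawning grep once per file.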
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide