eCryptfs: a Stacked Cryptographic Filesystem
As with any filesystem, you should make regular backups of your data when using eCryptfs. This is done easily and securely by unmounting eCryptfs and reading the lower encrypted files.
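For example, assuming the lower (encrypted) directory is /secret (the path is illustrative), a backup of the encrypted files might look like the following. The lower files are safe to copy as-is, because their contents remain encrypted at rest:

```shell
# Unmount the eCryptfs layer so only the encrypted lower files are visible
umount /secret

# Archive the lower encrypted files; no further encryption is needed
tar czf /backup/secret-lower.tar.gz /secret
```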
eCryptfs protects only the confidentiality of data at rest once it passes outside the control of the trusted host environment. On the trusted host itself, you should use access control mechanisms, such as SELinux, to regulate access to the decrypted files.
eCryptfs will, by default, preserve all of the information necessary to access the decrypted contents of the files in the contents of the lower files themselves. All that is required is the key used to create the files in the first place. You should take measures to protect this key. If applications, such as incremental backup utilities, are configured to read only the lower encrypted files, these utilities do not need to apply any further encryption to the files in order to ensure data confidentiality.
If you are using a passphrase, follow common best practices in selecting and protecting your passphrase (for instance, see www.iusmentis.com/security/passphrasefaq). I recommend using the public key mode of operation instead of passphrase mode whenever possible. When using a public key module, make a backup copy of your key file and store it in a physically secure location. Should you lose your key, nobody will be able to retrieve your data. Do not store unprotected copies of your passphrase or your public key file on the same media as your encrypted data.
You are free to choose among the symmetric encryption ciphers that are available through the Linux kernel cryptographic API. eCryptfs recommends AES-128 as the default cipher. If you have hardware acceleration available on your machine, and if it is supported by the selected cipher in the kernel cryptographic API, eCryptfs encryption and decryption operations will be hardware-accelerated automatically.
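For instance, the cipher and key size can be selected at mount time through eCryptfs mount options (the directory path here is illustrative):

```shell
# Mount /secret over itself using AES with a 16-byte (128-bit) key
mount -t ecryptfs /secret /secret \
    -o ecryptfs_cipher=aes,ecryptfs_key_bytes=16
```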
You should take measures to ensure that sensitive data is not written to secondary storage in unencrypted form. Applications that write out sensitive temporary data should be configured so that they write only under an eCryptfs mountpoint. You also should use dm-crypt to encrypt the swap space with a random key. The details are beyond the scope of this article, but commands to set it up take the following form:
# cryptsetup -c aes-cbc-plain -d /dev/random create swap /dev/SWAPDEV
# mkswap /dev/mapper/swap
# swapon -p 1 /dev/mapper/swap
SWAPDEV is the swap block device on your machine (refer to your /etc/fstab file if you are not sure which device currently is used for swap). You can create simple boot scripts to set up the encrypted swap space automatically, run ecryptfsd and perform eCryptfs mounts. Consult your distribution's documentation for more details on writing boot scripts and using dm-crypt with a random key to encrypt your swap space.
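A minimal boot script along those lines might look like the following sketch; the swap device and mountpoint are placeholders, and the exact daemon and mount invocations vary by eCryptfs release and distribution:

```shell
#!/bin/sh
# Hypothetical early-boot script: encrypted swap, then an eCryptfs mount.
SWAPDEV=/dev/sda2    # replace with your swap device (see /etc/fstab)

# Set up swap encrypted with a random key via dm-crypt
cryptsetup -c aes-cbc-plain -d /dev/random create swap "$SWAPDEV"
mkswap /dev/mapper/swap
swapon -p 1 /dev/mapper/swap

# Start the eCryptfs daemon and mount the encrypted directory
ecryptfsd
mount -t ecryptfs /secret /secret
```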
Note that current releases of eCryptfs encrypt only the file contents. Metadata about the file, such as its size, name, permissions and extended attributes, is readable by anyone with access to the lower encrypted file. Future work on eCryptfs will include encryption or obfuscation of some of this metadata.
Using block device encryption together with eCryptfs can combine the security provided by both mechanisms while offering the flexibility of having seamless access to individual encrypted lower files, although this roughly doubles the processing overhead of encrypting and decrypting the data. If only the contents of the files on secondary storage require confidentiality, eCryptfs by itself is, in most cases, sufficient.
eCryptfs was designed to support a host of advanced key management and policy features. The development road map for eCryptfs includes multiple keys per file, different keys and ciphers for different files depending on the application creating the file and the location where the file is being written, integrity enforcement and more extensive interoperability with existing key infrastructures and key management devices. These features will become available as they are implemented in future versions of the Linux kernel.
eCryptfs is a flexible kernel-native solution that cryptographically enforces data confidentiality on secondary storage devices. eCryptfs can be deployed on existing filesystems with minimal effort. The individual encrypted files can be transferred to other hosts running eCryptfs and accessed transparently using the proper key. The eCryptfs key management mechanism is highly extensible. eCryptfs is suitable to use as a strong and convenient data-confidentiality enforcement component to help secure data managed in Linux environments.