Encrypt Your Root Filesystem
The value /pci@f2000000/usb@1b,1/disk@1:2 comes from our earlier inspection of the Open Firmware device tree, and /pci@f2000000/usb@1b,1/disk@1 is the first disk on the USB bus on the PCI bus at f2000000. The device we are interested in is a disk, and :2 means partition 2.
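If you need to rediscover this path on your own hardware, the device tree can be browsed directly from the Open Firmware prompt. The following is only a sketch, assuming the same controller address as above; node names and addresses vary from machine to machine:

> dev /pci@f2000000/usb@1b,1
> ls

Here, dev selects a node in the device tree and ls lists its children; the disk@1 node should appear once the Flash disk is attached.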
4) Install the bootstrap programs and kernel to /dev/sda2:
# ybin --config /mnt/encroot/etc/yaboot.conf -v
# mount /dev/sda2 /media/usbstick
# cp /boot/vmlinux /media/usbstick
At this point, the crypto-aware initrd must be installed onto the Flash disk. Fedora provides a tool named mkinitrd that can create an initrd. However, at the time this article was written, mkinitrd did not know how to mount an encrypted root. The patch at https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=124789 provides this functionality. Once the patch is applied, mkinitrd reads /etc/crypttab and creates an appropriate initrd:
# mkinitrd --authtype=paranoid -f /media/usbstick/initrd.gz <kernel version>
# umount /media/usbstick
The file /mnt/encroot/etc/fstab should be updated to reflect the changes made:
/dev/mapper/root / ext3 defaults 1 1
Encrypted swap or the absence of swap space entirely is a prerequisite for an encrypted filesystem. Reasons for this can be found in “Implementing Encrypted Home Directories” and in a BugTraq mailing-list thread titled “Mac OS X stores login/Keychain/FileVault passwords on disk”. When the patch at https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=127378 is applied to the initscripts package, Fedora allows users to encrypt their swap partitions using a randomly generated session key. Because swap space isn't generally required to be consistent across reboots, the session key is not saved when the system is powered down. To enable encrypted swap, complete the following steps:
1) Add the following line to /mnt/encroot/etc/fstab, replacing any previous swap record:
/dev/mapper/swap swap swap defaults 0 0
2) Add the following line to /mnt/encroot/etc/crypttab to tell the system how to perform the encryption:
swap /dev/hda3 /dev/urandom swap
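Conceptually, the patched initscripts do for swap what we did by hand for the root filesystem, using a fresh key read from /dev/urandom at every boot. A rough sketch of the equivalent manual commands (for illustration only, not the actual initscript code):

# cryptsetup -d /dev/urandom create swap /dev/hda3
# mkswap /dev/mapper/swap
# swapon /dev/mapper/swap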
At this point we should be able to reboot the system and use our encrypted filesystem. Again, we need to hold down option-command-o-f to enter the Open Firmware prompt.
As demonstrated above, the path to the Flash drive's second partition is /pci@f2000000/usb@1b,1/disk@1:2. Knowing this, we can build the path /pci@f2000000/usb@1b,1/disk@1:2,\ofboot.b. The comma separates the partition number from the filesystem path; \ofboot.b is the filesystem path, and \ plays the same role as UNIX's /, with the filesystem root at the root of the device:
> dir /pci@f2000000/usb@1b,1/disk@1:2,\
 Untitled
      GMT                File/Dir               Size/
   date     time          TYPE       Name       bytes
 9/ 3/ 4  21:44:41     ???? ????  initrd.gz    2212815
 8/28/ 4  12:24:21     tbxi UNIX  ofboot.b        3060
 9/ 3/ 4   2:21:20     ???? ????  vmlinux       141868
 9/28/ 4  12:24:22     boot UNIX  yaboot           914
 9/28/ 4  12:24:22     conf UNIX  yaboot.conf
This confirms that Open Firmware can read the files required to boot the system. Setting the boot-device variable to /pci@f2000000/usb@1b,1/disk@1:2,\ofboot.b causes the system to boot from the Flash disk:

> setenv boot-device /pci@f2000000/usb@1b,1/disk@1:2,\ofboot.b
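Before rebooting, printenv can confirm the new value, and boot with no arguments boots from boot-device immediately; this check is an optional aside, not part of the required procedure:

> printenv boot-device
> boot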
Once the system successfully boots from the encrypted root, it is necessary to destroy all of the data on /dev/hda5. This can be done with the same procedure used to randomize the root filesystem's partition: dd if=/dev/urandom of=/dev/hda5. You may want to perform this overwrite several times. For one standard on sanitizing disks, see Chapter 8 of the US Department of Defense's “National Industrial Security Program Operating Manual”.
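For example, a simple shell loop makes several passes (three are shown here); each pass ends with dd reporting that the device is full, which is expected:

# for i in 1 2 3; do dd if=/dev/urandom of=/dev/hda5 bs=1M; done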
Following a safe sanitization, /dev/hda5 may be used as /home. The /home filesystem also should be encrypted. Luckily, this is a much simpler process, because the system need not boot off of /home. Creating the filesystem itself is similar to the steps taken to create the root filesystem.
1) Ensure that the aes, dm-mod and dm-crypt modules have been loaded into the kernel.
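If any of them is missing, something like the following loads them and verifies the result (module names appear with underscores in lsmod):

# modprobe dm-mod
# modprobe dm-crypt
# modprobe aes
# lsmod | grep -E 'dm_mod|dm_crypt|aes'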
2) Unmount the partition that will host the encrypted home filesystem, /dev/hda5, from /home:
# umount /dev/hda5
3) Create a random 256-bit encryption key, and store it at /etc/home-key. One way to do this is:
# dd if=/dev/urandom of=/etc/home-key bs=1c count=32
4) Create a dm-crypt device, encrypted using the key you just generated:
# cryptsetup -d /etc/home-key create home /dev/hda5
5) Create an ext3 filesystem on /dev/mapper/home:
# mkfs.ext3 /dev/mapper/home
6) Mount the new filesystem:
# mount /dev/mapper/home /home
7) Create an entry in /etc/crypttab, so that various utilities know how the filesystem was configured:
home /dev/hda5 /etc/home-key cipher=aes
8) Finally, update /etc/fstab to contain an entry for /home:
/dev/mapper/home /home ext3 defaults 1 2
At this point, it is appropriate to begin adding nonroot local user accounts to the system. Setting up the encrypted root filesystem is now complete.
Having all of your data encrypted can be dangerous. If the encryption key is lost, your data is lost. Because of this, it is important to make backup copies of the Flash disk containing your key. It also is crucial to perform plain-text backups of the encrypted data. If you maintain a bootable rescue disk, it may make sense to rethink the system components that should be on it. A copy of your root and home filesystem keys, parted, hfsutils, the cryptography-related kernel modules and cryptsetup are excellent candidates.
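One straightforward way to back up the Flash disk is to copy the whole device to an image file, assuming it still appears as /dev/sda as it did earlier; the destination filename here is only an example, and the image must be protected as carefully as the disk itself, because it contains your key:

# dd if=/dev/sda of=/root/flash-disk-backup.img bs=1M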
How effective is this technique in protecting your data? In his book, Secrets and Lies, Bruce Schneier presents a technique that is useful in evaluating this. An attack tree can be used to model threats. Figure 4 presents the beginning of an attack tree for our encrypted filesystem. It is important to note that this attack tree is not complete and probably never will be.
By using the techniques in this article and a little creative thinking, it is possible to make the data on your hard disk more resistant to certain types of theft. It is important to keep in mind the types of attacks that circumvent these defensive techniques. Though other techniques must be used to protect against network-based and other attacks, those described here are a powerful tool toward the goal of overall system security.
Resources for this article: /article/7865.
Mike Petullo currently is working at WMS Gaming as a test engineer. He has been tinkering with Linux since 1997 and welcomes your comments at email@example.com.