Paranoid Penguin - Customizing Linux Live CDs, Part III
The past couple months, I've been showing how to create your very own customized Ubuntu live CD. In “Customizing Linux Live CDs, Part I” (LJ, May 2008), I provided a basic procedure for mounting an Ubuntu Desktop 7.10 ISO image; removing, adding and updating its software packages; and repacking it into a new ISO image.
In Part II (LJ, June 2008), I showed how to create an encrypted virtual disk volume using TrueCrypt and explained how to use it in conjunction with your customized live CD—for example, by mounting it over the live CD default user's Documents folder.
This month, I wrap up the endeavor with some odds and ends, including my thoughts on network dæmons and firewall scripts, on plausible deniability scenarios and why you probably don't need to bother trying to enable user logins with your live CD.
As I was wrapping up Part II last month, I mentioned that the default account on Ubuntu live CDs, ubuntu, has no password. And, I implied that next time we might talk about “fixing” that.
At least, that's what was lurking at the back of my mind when I wrote the article. Why not, I wondered, set a password for the ubuntu account on the live CD, and configure GDM to start with a login prompt?
But, the more I think about this, the less I think it's worth the effort. Let me take a few minutes to discuss why that may be.
Security is all about risk management. What controls can be employed to reduce or eliminate the risk of some bad thing happening? Is that risk likely enough to be worth the trouble of the controls? Does the control itself add other risks?
We set up accounts with passwords in order to mitigate the risk that some unauthorized person may gain access to system resources or data. On a system that has multiple users, or that is reachable over networks (especially if it's always connected), this is a serious risk.
But on an ephemeral system, such as a live CD with no hard drive of its own, there are better ways to protect access and data. Access is controlled easily by setting up your live CD so that when booted, it doesn't run any network services to which unauthorized users can connect. You can protect your personal data by keeping all of it elsewhere—for example, on a TrueCrypt-encrypted volume on a USB drive, which I showed you how to set up last month.
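For example, assuming you created a TrueCrypt volume called personal.tc on a USB drive mounted at /media/usbdisk (both names are purely illustrative), you could mount it over the default user's Documents folder with a command along these lines (exact syntax varies between TrueCrypt versions):

$ sudo truecrypt /media/usbdisk/personal.tc /home/ubuntu/Documents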
The sad fact of the matter is that anybody with physical access to your live CD in any form (burned onto a CD, or stored as an ISO image) can simply recustomize it using the same procedure you used and delete the password hash from your custom user account's entry in /etc/shadow. Or, more likely, the attacker can skip customizing it altogether and simply mount and copy the interesting parts of your squashfs image. No boot, no login!
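To see just how little work that is, here's a sketch of what an attacker (or you, for that matter) could do with nothing more than root on some other Linux box and a copy of your ISO. The image name and mountpoints here are illustrative, but casper/filesystem.squashfs is where Ubuntu Desktop 7.10 keeps its root filesystem:

# mkdir -p /mnt/iso /mnt/squash
# mount -o loop ./mylivecd.iso /mnt/iso
# mount -t squashfs -o loop /mnt/iso/casper/filesystem.squashfs /mnt/squash
# cat /mnt/squash/etc/shadow

At that point, every file on your live CD is readable, password or no password.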
This is the same reason that with “normal” Linux systems (hard drive or Flash-based systems) physical access is so important. Unless you're using encrypted system volumes, anybody with physical access to your Linux computer can reboot from a live CD, mount your hard drive, and copy and alter system files at will.
So again, the best way to protect the data you use with your live CD is to store it on an encrypted volume—either one small enough to fit on your live CD image (assuming you can live with read-only access to that data), or one stored on a USB drive. And, the best way to control access to your live CD system is not to run any network services.
The good news here is that on Ubuntu Desktop 7.10, only two network dæmons run by default: the CUPS printing system and the Avahi dæmon, which is part of the Zeroconf system for automated file/music sharing and Voice-over-IP client discovery. And, of the two, only Avahi is potentially problematic, because CUPS listens only on the local loopback interface—by default, CUPS doesn't accept connections from nonlocal processes.
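If you'd like to verify that for yourself, boot the live CD and list its listening sockets (the netstat flags below show listening TCP and UDP sockets along with the owning process):

$ sudo netstat -plunt

On a stock Ubuntu 7.10 Desktop session, you should see cupsd bound only to 127.0.0.1 on TCP port 631, and avahi-daemon listening for mDNS on UDP port 5353 on all interfaces.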
How “problematic” is Avahi? Actually, not necessarily very problematic at all. Truth is, I'm not aware of any critical security vulnerabilities in Avahi. However, it is the only thing standing between you and a system that accepts no foreign connections! If you disable Avahi, your system will be completely unresponsive to port scans and security scans. If a house with locked doors is secure, a house with no doors at all is extremely secure.
Disabling Avahi is a very simple step to add to the process of customizing an Ubuntu live CD (see the Appendix for the commands described in Part I of this series). Once you've mounted your ISO, mounted your squashfs image, and chrooted yourself into your live CD image's root filesystem (steps 00 through 12 in the Appendix), you need to issue only one command:
12.5-# update-rc.d -f avahi-daemon remove
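If you want to double-check your work before repacking the ISO, you can confirm (still within the chroot) that update-rc.d really did remove Avahi's start and kill symlinks:

# ls /etc/rc?.d/ | grep avahi

If that returns nothing, the avahi-daemon init script will no longer be invoked at boot.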
You could, of course, also run a personal firewall script to be extra safe. But in this context (bootable-CD-based desktop), I'm not convinced it's worth the trouble, if it's possible to run without network dæmons in the first place. First, you can't necessarily be sure what your local IP address and Ethernet interface names will be, if you're going to run your live CD from random hardware, such as coffee-shop workstations. This makes it difficult to write things like anti-IP-spoofing rules.
Second, neither Ubuntu nor Debian (on which it's based) has a native firewall script service. If they did, you could simply add your firewall rules to an existing script somewhere in /etc, as with RHEL and SUSE. Instead, with Debian and Ubuntu, you need to either create your own startup script, or install additional software such as Firestarter on your live CD image, configure it on some other system the way you want it on the live CD, and copy the resulting configuration file(s) over to your live CD's filesystem.
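For what it's worth, if you did go the create-your-own-script route, a Debian-style firewall startup script needn't be elaborate. Here's a minimal sketch (the filename is arbitrary, and these rules are a bare starting point, not a complete policy):

#!/bin/sh
# /etc/init.d/local-firewall: a bare-bones inbound-blocking ruleset
case "$1" in
start)
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    ;;
stop)
    iptables -P INPUT ACCEPT
    iptables -F INPUT
    ;;
esac

You'd then activate it from within your chroot with update-rc.d local-firewall defaults.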
Again, in this context, going network-dæmon-free is much simpler. Note, however, that this is one of very, very few situations in which I recommend against using iptables for local protection. Ordinarily, that is an important protection!