EOF - Give TCPA an Owner Override
The Trusted Computing Platform Alliance (TCPA) overview in Linux Journal's August 2003 issue, “Take Control of TCPA”, written by Dave Safford, Jeff Kravitz and Leendert van Doorn, discusses some of the TCPA system's security features. The article does not consider, however, that the remote attestation feature of TCPA is likely to cause harm unless a simple fix is applied.
TCPA's basic rationale is correct: today's PC architecture lacks hardware features that would improve computer security. For instance, current PCs have no secure key storage mechanism. If you keep your GPG key on your computer, as most GPG users do, someone who breaks in is able to read it from disk and copy it.
The PC also is vulnerable to keyloggers and screen-grabbers. An attacker can watch you by installing code to record what you type and can capture what appears on your monitor.
TCPA improves computer security by addressing these weaknesses with corresponding hardware improvements. One of these, the secure key storage mechanism described in “Take Control of TCPA”, lets you prevent keys from being stolen, even in the case of a root compromise.
The Trusted Computing Group (TCG) also may develop hardware to defend against software keyloggers and screen-grabbers and to support stronger memory isolation than what exists in most popular operating systems today. Even if your system is compromised, your data and communications still could be protected. This approach would help programmers limit the privilege of software, even operating systems—that's good security design.
But trusted computing architects have gone astray in designing “system software integrity measurement”, which Safford et al. note “can be used to detect software compromise”. The TCPA software attestation mechanisms go beyond this; they're built to enforce policies even against the wishes of the computer owner.
Even as the attestation features prove to you that your computer's software is unaltered, they also can be used to prove to a third party that your computer's software is unaltered. If you, as the computer owner, make a change, a third party sees that the content of your attestation is different. That's fine if the third party has your interests at heart, but computer owners and third parties with whom they interact often have different interests.
Creating a reliable way for a third party to determine what software you're using is a pernicious project. Today, it's trivial to fool “Internet Explorer only” sites: change how your browser identifies itself, and there is nothing the other end can do. With TCPA's remote attestation, a site that insists on an attestation would receive the whole truth or a highly suspicious “No Comment”.
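To make the status quo concrete: today a client identifies itself with a self-reported User-Agent header, which the server has no way to verify. A minimal sketch (the URL and the IE 6 string are just illustrative):

```python
# Spoofing a browser's identity today is a one-line change: the User-Agent
# header is a bare claim that the server must take at face value.
import urllib.request

req = urllib.request.Request(
    "http://example.com/",  # hypothetical "Internet Explorer only" site
    headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"},
)
# Nothing on the wire distinguishes this request from a real IE 6 request.
print(req.get_header("User-agent"))
```

Remote attestation would replace this unverifiable claim with a cryptographic proof, which is precisely what removes the owner's freedom to misreport.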
Another casualty might be the software interoperability that we currently take for granted. For example, today you can access Microsoft file servers with a Samba client. With a software attestation scheme that treats you as an adversary, software publishers and service providers would have a remarkably sophisticated toolset for locking out competition, forcing upgrades or downgrades and deliberately breaking interoperability. Software publishers could control their customers' backups or migration paths or build monopoly power over after-market products and services. That some will do so seems beyond dispute.
In the past, anticonsumer misfeatures and barriers to interoperability were remedied by reverse engineering. But the TCPA security design allows software publishers to hamper this practice severely. They can create encrypted network protocols and file formats—with keys tucked away in hardware rather than accessible in memory. Thus, the same security features that protect your computer and its software against intruders can be turned against you.
Fortunately, this problem is fixable. TCG should empower computer owners to override attestations deliberately to defeat policies of which they disapprove. Giving the owner this choice preserves an essential part of the status quo: third parties can never know for sure what's running on your PC. TCG already defines a platform owner concept. The TCG specification also should provide for a facility by which the platform owner, when physically present, can force the TPM chip to generate an attestation as if the Platform Configuration Registers (PCRs) contained values of the owner's choice instead of their actual values.
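The override described above can be sketched in a few lines. This is a toy model, not the TPM API: HMAC stands in for the attestation identity key (AIK) signature, and all names are invented. What it shows is that a TPM quote is just a signature over PCR contents plus a challenger's nonce, so an owner who can substitute PCR values produces a quote indistinguishable from an unmodified platform's.

```python
# Toy model of PCR extend and quoting, illustrating the Owner Override idea.
import hashlib
import hmac

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR extend: new value = SHA-1(old value || measurement), as in TPM 1.2."""
    return hashlib.sha1(pcr + measurement).digest()

def quote(pcrs: list, nonce: bytes, aik: bytes) -> bytes:
    """Sign PCR contents plus the challenger's nonce (HMAC stands in for the AIK)."""
    return hmac.new(aik, b"".join(pcrs) + nonce, hashlib.sha1).digest()

aik = b"attestation-identity-key"  # held inside the TPM, never revealed

# The platform actually booted a modified bootloader...
actual_pcr0 = extend(b"\x00" * 20, b"modified bootloader")
honest = quote([actual_pcr0], b"challenger-nonce", aik)

# ...but with Owner Override, the physically present owner tells the TPM to
# attest to the values a stock configuration would have produced.
expected_pcr0 = extend(b"\x00" * 20, b"stock bootloader")
override = quote([expected_pcr0], b"challenger-nonce", aik)

print(honest != override)  # prints True: the quotes differ, and the challenger
# cannot tell the override quote from one made by a genuinely stock platform.
```

The point is that the override changes only what the TPM reports to third parties; local integrity checking, where the owner compares PCRs against known-good values, works exactly as before.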
APIs and a clear user interface for the override mechanism could be specified by an appropriate TCG committee. Only the platform owner should be able to do this; whenever a machine provides an inaccurate attestation, it does so for what its owner considered an appropriate reason. This change would do nothing to undermine the basic security benefits of the TCPA hardware, including those outlined in the Safford article; you still could tell whether your computer had been altered.
EFF (www.eff.org) has been in regular contact with the TCG organization and its members. So far, these companies have resisted the Owner Override idea. We can use your help in the future to get them to fix trusted computing so that it benefits computer owners.
Seth David Schoen created the position of EFF Staff Technologist to help other technologists understand the civil liberties implications of their work, to help EFF staff better understand the technology underlying EFF's legal work and to help the public understand what the technology products they use really do.