A Machine for Keeping Secrets?

Is There an Evolutionary Upgrade Path?

I'm not suggesting that we throw out everything that has been done and start again. My suspicion is that, with a concerted effort, ideas from capability-based systems could to a very substantial degree be re-integrated into Linux. Security-Enhanced Linux borrows some of this terminology, but without making the full object-capability model available. Post-Snowden, now fully aware of how pervasive advanced persistent threat attacks on our machines are, it seems like it should be possible to reconsider what we think we know about software and security for the new operating environment in which we find ourselves. But can we work outward from the long-established SELinux project toward those goals?

This is not a straightforward proposition for two reasons: the current limitations of SELinux and the problem of who wrote SELinux.

SELinux currently builds on top of Linux's POSIX capabilities, which divide the power of root into a set of compartments, avoiding the use of setuid. This is important because, in the event of a privilege-escalation bug, the illicitly gained privileges aren't the full power of root, but a constrained subset of those powers: notionally, under SELinux, breaking "sudo tail /log/stuff" won't give you access to install new software, poke at the network stack or do anything else unrelated. You might be able to read what you should not, but you can't write to a damn thing. However, the POSIX capability model in SELinux is (confusingly) not the full-blown object-capability model, because it does not allow for delegation and (as far as I can tell from the docs!) applies only to superuser privileges. It comes from a different theoretical base.
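The contrast between all-powerful root and a compartmentalized privilege set can be sketched in a toy model. This is plain Python and purely illustrative — real POSIX capabilities (CAP_NET_ADMIN and friends) are enforced by the kernel and manipulated with tools like setcap, and the privilege names below are invented for the example:

```python
# Toy model of compartmentalized privileges (illustrative only; real
# POSIX capabilities are kernel-enforced, not userspace checks).

class Process:
    def __init__(self, privileges):
        # An explicit, frozen privilege set instead of "is root: yes/no".
        self.privileges = frozenset(privileges)

    def require(self, privilege):
        if privilege not in self.privileges:
            raise PermissionError(f"missing privilege: {privilege}")

# A log reader holds only what "tail /log/stuff" actually needs:
log_reader = Process({"read_logs"})
log_reader.require("read_logs")            # allowed

# Even if this process is compromised, unrelated powers simply aren't there:
try:
    log_reader.require("install_software")
except PermissionError as e:
    print(e)                               # missing privilege: install_software
```

The point of the sketch: an attacker who breaks the log reader inherits its compartment, not the machine.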

In a full-blown object capability system with delegation, like the research operating system lineage of GNOSIS, KeyKos (used in production systems), EROS, CapROS and Coyotos, a program (let's say a ported version of GIMP) is run and is blind. It can't see the filesystem, the network stack or anything else; it exists in the void. A user opens a filesystem browser and passes a file to the program, and along for the ride goes a necessary set of access keys, passed invisibly by the operating system. These can be implemented as cryptographic tokens, a little like Kerberos, or as an operating-system-level grant of permissions. Now GIMP can see that file. It can pass the token to the operating system like a filename or handle, which then will open/close the file, and so on. Furthermore, when permitted, it can pass that token on to another program. Want to run an external filter that only exists as a command-line utility? GIMP can pass that token over to the external utility; the authority to see the file is a transferable asset. And, this model extends across computers. A token for, say, Wi-Fi access can be passed from one machine to another as a delegated authority, and authorities can be chained and combined. Something can act on your behalf (it has the token) without being you as far as the software is concerned.
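The flow above — a blind program receiving a token, then delegating a weaker version of it to a helper — can be sketched as a toy in Python. Everything here (the class, the rights names, the filter) is invented for illustration; real object-capability systems enforce this at the kernel or language-runtime level:

```python
# Toy object-capability sketch (not a real OS API).
# A capability bundles a reference with the authority to use it;
# possession is authorization, and the holder may delegate it.

class FileCapability:
    def __init__(self, path, rights):
        self._path = path
        self.rights = frozenset(rights)

    def read(self):
        if "read" not in self.rights:
            raise PermissionError("no read right")
        return f"<contents of {self._path}>"   # stand-in for real I/O

    def attenuate(self, rights):
        # Delegate a possibly weaker capability to another program.
        return FileCapability(self._path, self.rights & set(rights))

def external_filter(cap):
    # A separate utility: it never sees the filesystem, only this token.
    return cap.read().upper()

# The user hands GIMP a capability; GIMP delegates a read-only one onward.
cap = FileCapability("/home/user/photo.xcf", {"read", "write"})
print(external_filter(cap.attenuate({"read"})))
```

Note the attenuation step: the external filter gets the authority to read this one file and nothing more, which is exactly the "transferable asset" property described above.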

Say a printer requires network access from one user, and a file to print from another. Normally this is a little tricky. You usually wind up with one user e-mailing the file to the other, because the printer expects to work for a single individual: authentication is authorization. In an object capabilities system, the printer (or device, or program) simply assembles capabilities until it has what it needs to do the job. This completely breaks the model in which people (all too commonly) hand out passwords, which carry unlimited power, to people they actually want to perform just one specific job on a remote machine. The granularity of control is so much finer, and delegation fits our real-world security use cases so much better, than the password-identity model does. You may still use a password to log in, but after that, it's delegated capabilities to manage untrusted software (and untrusted people) all the way down. Doesn't that sound like a better way of doing business in our unsafe times?
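The printer scenario can be made concrete with another toy sketch, continuing in the same hypothetical style (all class and grant names invented for the example). The key behavior: the device collects grants from whoever holds them, with no notion of a single owning identity:

```python
# Toy sketch of a device assembling capabilities from different users
# (illustrative only; names here are invented for the example).

class Capability:
    def __init__(self, holder, grants):
        self.holder = holder
        self.grants = frozenset(grants)

class Printer:
    REQUIRED = {"network_access", "read_document"}

    def __init__(self):
        self.grants = set()

    def accept(self, cap):
        # Grants can come from anyone; no single identity is needed.
        self.grants |= cap.grants & self.REQUIRED

    def ready(self):
        return self.REQUIRED <= self.grants

printer = Printer()
printer.accept(Capability("alice", {"network_access"}))
print(printer.ready())     # False: still missing the document
printer.accept(Capability("bob", {"read_document"}))
print(printer.ready())     # True: it can now do the job
```

Neither Alice nor Bob ever shares a password, and neither hands over more authority than the one job requires.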

Now for the second problem: who wrote SELinux?

NSA Security-Enhanced Linux is a set of patches to the Linux kernel and some utilities to incorporate a strong, flexible mandatory access control (MAC) architecture into the major subsystems of the kernel. It provides an enhanced mechanism to enforce the separation of information based on confidentiality and integrity requirements, which allows threats of tampering and bypassing of application security mechanisms to be addressed and enables the confinement of damage that can be caused by malicious or flawed applications. It includes a set of sample security policy configuration files designed to meet common, general-purpose security goals.

The NSA team behind SELinux released it under a FOSS license at the end of 2000. Now we need to ask ourselves, what is it? We have strong reason to suspect from the Snowden documents that long-term attempts to compromise open and academic security work are part of the NSA's mandate—for example, subverting the National Institute of Standards and Technology cryptography credentialing process by introducing flawed algorithms and getting NIST to sign off on them as credible standards. And, as bitter experience with OpenSSL has shown us (Heartbleed), "with enough eyes, all bugs are shallow" in fact buys us very little security. OpenSSL was extremely underfunded ($2,000 per year!) until the Heartbleed bug brought the world's focus to the plight of OpenSSL's underpaid development team. GPG's development team has been similarly underfunded. This is not working.

So now we have to look at SELinux in much the same light as (sadly) the Tor project—FOSS security tools funded by deeply untrusted sources with a long history of coercive undermining of security, privacy and user control of their own computers. Do we have enough eyes to be able to trust the code under these circumstances? SELinux is one of only four systems that can provide this kind of control under Linux (the others being AppArmor, Smack and Tomoyo) using the same underlying POSIX capabilities. Are eyeballs the right metric? Is that enough eyeballs?

These ongoing questions cut to the heart of our security development processes. I hope in the next few years we find a good way of funding the necessary security work that we, and increasingly the entire world, depend on day in, day out.

Enter Capsicum. Capsicum is a fairly serious push to integrate a full implementation of capability-based security deeply into FreeBSD. There is an ongoing effort to bring Capsicum to Linux, and that work is continuing. This seems like a sensible and obvious approach to providing users with an appropriate level of security for the post-Snowden environment we now know we operate in. Because any flawed piece of software assumes the full permissions of the user or of the superuser, depending on whether it was a user agent like a browser or a dæmon that got compromised (roughly speaking), we face a choice: either perfectly tighten every bolt on an entire starship and miss not a single one, or install bulkheads and partition the space into safe areas, so that if there is a compromise, it is not systemic.
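The bulkhead pattern Capsicum encourages — acquire the handles you need, then drop all ambient authority — can be sketched in Python. This is a toy mimic of the shape of the idea, not Capsicum's real interface, which is a C API in which cap_enter() drops ambient authority and cap_rights_limit() restricts what each file descriptor can do:

```python
# Toy mimic of capability-mode sandboxing (illustrative only; real
# Capsicum is kernel-enforced via cap_enter()/cap_rights_limit()).

import io

class Sandbox:
    def __init__(self):
        self.entered = False

    def enter(self):
        # After this point, no new ambient authority may be acquired.
        self.entered = True

    def open_file(self, path):
        # Stand-in for open(): an ambient-authority operation that the
        # sandbox cuts off. Held handles keep working afterward.
        if self.entered:
            raise PermissionError("ambient open() refused in sandbox")
        return io.StringIO(f"data from {path}")

box = Sandbox()
fd = box.open_file("/etc/motd")   # acquired before entering
box.enter()
print(fd.read())                  # handles obtained earlier still work
try:
    box.open_file("/etc/passwd")  # new acquisition fails
except PermissionError as e:
    print(e)
```

A compromise after enter() is confined to the handles already held — the bulkhead, rather than the perfectly tightened bolt.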

Bolt-tightening approaches to security are certainly necessary, but I cannot see any way to offer users comprehensive privacy and security on devices that act as secure end points without capability-based operating system concepts coming to Linux in a big way, and right now, that appears to mean Capsicum is the only game in town. This is a matter of some urgency. End-point security weaknesses are really starting to have systemic effects. Let me explain.

I would be much more comfortable if I did not have to trust the thousands of apps on my laptop as much as I do today, and I have very specific reasons for my unease: private key management. Right now I work for Ethereum, a FOSS project producing software to build a globally distributed metacomputer that includes a blockchain database. It's a bit like bitcoin, but it uses the database to store executable software in the form of "contracts" (little scripts you trust to manage your assets).

I think Ethereum is pretty cool. We expect to see an awful lot of very interesting use cases for the platform. Many people may wind up deeply relying on services using that software. For example, comprehensive solutions to the increasing mess that is DNS and SSL certificate issuance could come out of a globally distributed database with scripting: register a domain on the blockchain and self-publish your certificates using the same keys you used to pay for the domain name registration. Simple. Namecoin has already given some sense of what is possible, and I have no doubt there is far more to come.

There is more at risk than individual users being compromised and having their contracts spoofed. In a distributed system, there is a monoculture risk. If we have individual users being hacked because their laptops slip a version behind the bleeding-edge security patches, that's bad enough. We have all heard tales of enormous numbers of bitcoins evaporating into some thief's pockets. But if we have only three major operating systems, run by >99% of our users, consider the risk that a zero-day exploit could be used to compromise the entire network's integrity by attacking the underlying consensus algorithms. If enough computers on the network say 2+2=5, the nature of blockchains is that 2+2 not only equals 5, but always will.
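The monoculture failure mode is easy to see in schematic form. The sketch below is deliberately simplistic — real blockchain consensus involves proof schemes and far subtler rules — but it shows why a zero-day shared by a majority of identical machines is a systemic threat rather than an individual one:

```python
# Schematic illustration of monoculture risk in majority consensus
# (purely a toy; real consensus protocols are far more involved).

from collections import Counter

def consensus(votes):
    # The network records whichever answer the majority reports.
    return Counter(votes).most_common(1)[0][0]

honest = ["4"] * 49        # nodes computing 2 + 2 correctly
compromised = ["5"] * 51   # nodes all hit by the same zero-day

# The chain records "5", and by design that record is permanent.
print(consensus(honest + compromised))
```

With three operating systems covering nearly every node, one exploit can flip enough "votes" at once to rewrite what the network agrees is true.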

Huge disruption to everyday life could result from an error like this if blockchain technology winds up being the solution to DNS and SSL namespace issues (a conclusion I consider likely and that I may write up in the future for this journal). We could lose basic connectivity to a large part of the Internet in the event that the consensus protocols are attacked by compromised machines. If a zero-day were used to construct malware that abused or simply published private keys, that also could have disastrous effects not only for individual users, but also for the decentralized databases as a whole. If blockchains turn out to be vital to the Internet of Things (IBM has an Ethereum-based project, ADEPT, looking at blockchains and the IoT), then even if the blockchain itself and our software are secure, we have hostages to fortune in the form of the PCs being used to manage the keys and the code on which all of this value and utility depend.

There is an urgent problem: users are starting to store very real value on their machines, not simply in the form of indirect access to value via banking Web sites, but as direct access to their own private keys and a political role in the consensus algorithms on which the entire blockchain is formed. This is all getting a lot more systemic and potentially serious than having somebody read one's e-mail or private journal.

Right now the weakest link in the ongoing adoption of blockchain technology is operating system security. The fear that one will get hacked discourages many users from using software on their own machines to do their computation (to use the Stallman model). Instead, third-party Web sites operating theoretically more secure wallets are common—essentially people are storing their bitcoin and similar "in the cloud" because they do not trust their own PCs enough to store value in a decentralized fashion. This negates many of the decentralization benefits of blockchain-based approaches. This is clearly a massive problem when viewed from Stallman's usual mode of analysis.

Surely at this point in history it's time for us to return computing to its roots. The computer is a machine for keeping my secrets: my banking details, my cryptocurrency holdings, my private keys controlling software acting on my behalf on the blockchain, or in the Internet of Things, or in virtual reality, or in any other setting. It's becoming increasingly necessary that users can actually store value on their own machines, and right now, playing whack-a-mole with zero-day exploits is not a good enough security model for this revolution to continue. We have to return to the hard question: how do I stop other people from telling my computer what to do without first asking me?

Encryption without secure endpoints isn't going to help very much, and right now, operating system security is the weakest link. I look forward to your ideas about how we might address these issues in an ongoing fashion—both as a question of awareness raising and funding models, and for the long, hard quest for genuine security for average users. Ordinary people should be able to store value on their home computers without feeling that they have automatically left the front door open with the keys in the lock. How can we provide people with an equivalent level of protection for their bank accounts or their bitcoin holdings? This is the real challenge facing cryptocurrencies, blockchains and even the Internet of Things. If we cannot trust the users' devices, how can we give them all this access to and power over users' lives?

The revolution is stalling for ordinary users because they cannot trust their operating systems to protect their private keys and thereby their accounts. What now?


Vinay Gupta is the release coordinator for Ethereum, a FOSS scriptable blockchain platform.