A Machine for Keeping Secrets?

[I can't begin to describe all the things Vinay Gupta does. Fortunately, he does, at http://re.silience.com. There his leadership across many involvements is on display, and you can treat yourself to many hours of productive reading, listening and viewing—many involving breeds of Linux. After getting a little hang time with Vinay in London recently, I invited him to treat us to a guest EOF on any topic of his choice. He took the bait, and here it is.—Doc Searls]

The Lesson of Ultra and Mincemeat

The most important thing the British War Office learned about cryptography was how to keep a secret: Enigma was broken at Bletchley Park early enough in World War II to change the course of the war—and of history. Now here's the thing: only if the breakthrough (called Ultra, which gives you a sense of its importance) remained secret could Enigma's compromise be used to defeat the Nazis. Breaking Enigma was literally the "zero-day" that brought down an empire. A zero-day is a bug known only to an attacker. Defenders (those creating and protecting the software) have never seen the exploit and are, therefore, largely powerless to respond until they have done their analysis. The longer the zero-day is kept secret, and its use undiscovered, the longer it represents absolute power.

Like any modern zero-day sold on the black market, the Enigma compromise had value only as long as it remained secret. The stakes were higher, but the template of the game—secret compromise, secret exploitation, doom on discovery—remains one basic form of the computer security game to this day. The Allies went to extraordinary lengths to conceal their compromise of Enigma, including traps like Operation Mincemeat (planting false papers on a corpse masquerading as a drowned British military officer). The Snowden revelations and other work have revealed the degree to which this game continues, with many millions of taxpayer dollars spent keeping illicit access to software compromises available to the NSA, GCHQ and all the rest. The first rule is not to reveal success in breaking your enemy's security by careless action; the compromise efforts that Snowden revealed had, after all, been running for many years before the public became aware of them.

Who Does Software Serve?

I would like to posit a fundamental problem in our attitude toward computer security. For a long time, we have assumed that computers are tools much like any other. Pocket calculators and supercomputer clusters all share the same von Neumann architecture (another artifact of WWII). But the truth is that the computer also has been, from its very first real implementation, a machine for keeping and seeking secrets. This history applies not just to the Enigma machines that the British subverted to help defeat the Nazis, but also to IBM's Hollerith tabulators, which the Nazis used to identify Jews from census databases.

This is why the general utility model of computing we now use is notoriously difficult to secure. At a conceptual level, all programs are assumed to be direct representatives of the user (or superuser). This is a fundamental mistake, a conceptual error that cannot be repaired by any number of additional layers piled on top of it: software serves its authors, not its users. Richard M. Stallman, of course, understands this clearly but focuses mainly on freeing the source code, giving technical users control of their software. But beyond the now-rusty saw of "with enough eyes, all bugs are shallow", the security community as a whole has not gone back to basics and assigned the intentionality of software correctly: to its authors, rather than to its users. Once we admit that software works for those who wrote it, rather than the hapless ones running it, many of the problems of managing computer security become much clearer, if not easier! Furthermore, there is always the gremlin: discordia manifested as bugs. Software behaviors that no human intended are not only common, but ubiquitous. In these cases, software serves neither the user nor the author, but silently adds to the entropy of the universe all by itself.

Imagine if all the people who wrote the software you use every day were made visible. If you run a fully free computer, right down to the BIOS, you will generally see a group of people who are fully on your side. But then there is the router, and the firmware in your mouse and your telephone's baseband processor, and indeed the epic maze of software that powers the electrical grid to which your devices must connect, and so on. In truth, we do not like or trust many of the people writing the software on which our lives depend in so many ways. The fact that in the 21st century we still download and run programs that have arbitrary access to all of our personal files and data, and often deep access to our operating systems, is frankly madness. I'm not discussing sandboxing or virtual environments—these may be answers, but let us first clearly state the question: who does this machine serve?

The machine serves the authors of the software, not the person choosing to run it. If you have recently handed over permissions you were not entirely happy with while installing software on an Android phone, you have felt a sense of "No, I do not want you to do that—that's your desire, not mine!" Often we do not entirely trust those authors, their software or the hardware on which it runs. We literally cannot trust our possessions. Nobody wants to carry a snitch in their pocket, and yet we all do.

In an ideal world, all of our systems (and perhaps not only technological ones) would obey the Principle of Least Privilege. Rather than granting large, abstract powers to code (or other systems) and trusting there to be no bugs, we could grant powers in a much narrower way. Consider the all-too-typical "programs can see the entire filesystem" permission we grant to nearly all software dæmons: when something goes wrong, it results in disasters like Squid deleting your root filesystem when restarting. Why does Squid need the power to do that? Why even hold those keys?
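To make that concrete, here is a minimal sketch in C (my own illustration; nothing here comes from Squid, and the spool path is hypothetical) of what a narrower grant looks like in practice. The cleanup routine is handed one open directory handle and works only relative to it, so a mangled or empty configuration variable can no longer resolve to "/" and take your root filesystem with it:

    /*
     * Illustrative sketch only: a cleanup routine whose sole authority is
     * one open directory handle. It deletes entries relative to that handle,
     * so there is no absolute path here for a bug to mangle.
     */
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Remove non-hidden entries inside the directory dirfd refers to.
     * unlinkat() takes names relative to dirfd; plain directories are
     * left alone because the flag argument is 0. */
    static void clean_spool(int dirfd)
    {
        DIR *d = fdopendir(dup(dirfd));
        if (d == NULL)
            return;
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            unlinkat(dirfd, e->d_name, 0);
        }
        closedir(d);
    }

    int main(void)
    {
        /* The narrow grant: one directory handle, not the whole filesystem. */
        int spool = open("/var/spool/example", O_DIRECTORY | O_RDONLY);
        if (spool < 0) {
            perror("open spool");
            return 1;
        }
        clean_spool(spool);
        close(spool);
        return 0;
    }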

So What Happens If We Choose Not to Trust Everybody?

There was a path not taken: capability-based operating systems. Capability-based operating systems really are machines for keeping secrets. They assume that all code is written by people we do not trust, and that the code may contain damaging bugs, if not outright assaults. "All code is untrusted code" creates a completely different role for the operating system in protecting users from the tools they themselves have downloaded. This is a realistic model of what software is like, an explicit model of distrust, unlike the vague trust we feel when installing software many other people are using, thinking "with enough eyes all bugs are shallow, so I'm sure this will be fine." That's not a great model of trust! Capability-based systems assume that all code may be evil, even code the user writes (bugs!), so it is, by default, untrusted in the most profound way.

A bare program can do nothing: no network, no filesystem access, nothing at all until it is granted permissions, and the operating system provides a smooth interface for an extremely granular approach to granting and managing those permissions. This is not like the Android model, where the application has access to high-level constructs like "your address book"; rather, this extends all the way from that level down to low-level, file-by-file access control.

In an object capability model, a program cannot open a directory or search for files without a go-ahead from a user, although usually that go-ahead is implicit. For example, passing an open file handle as a command-line argument would grant the relevant program access to that file. A shell could manage those open file handles for the user, opening files and passing their handles in a way that is seamless and transparent. Without that permission, all attempts to access a file simply will be met by failure; as far as the software is concerned, that resource does not exist.
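Linux is not an object capability system, but the flavor of designation-by-handle is easy to sketch. In this illustrative C fragment (the file name is hypothetical, and on stock Linux the helper still runs with the user's full privileges; the point is the pattern), a "shell" role opens the file the user chose and re-executes the program as a helper that receives only the descriptor number. The helper never sees a path and never calls open():

    /*
     * Illustrative sketch of designation by handle rather than by name.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc == 2) {
            /* Helper role: argv[1] names an already-open descriptor that was
             * inherited across exec(). That handle is all we are given. */
            int fd = atoi(argv[1]);
            char buf[256];
            ssize_t n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("helper read: %s", buf);
            }
            return 0;
        }

        /* "Shell" role: open the designated file, then grant exactly that one
         * handle to the helper by leaving it open across exec(). */
        int fd = open("notes.txt", O_RDONLY);   /* hypothetical file */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        char fdstr[16];
        snprintf(fdstr, sizeof(fdstr), "%d", fd);
        execl("/proc/self/exe", argv[0], fdstr, (char *)NULL);
        perror("execl");
        return 1;
    }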

To get to this security position, one has to be very clear about the politics of software. Why was this code written? Who does it serve? Toward whose advantage does it work? Cui bono? Even if the only illicit advantage is a bug or two serving only the increase of entropy in the universe, we must admit that, when we get right down to it, if you did not write the software yourself, it's pretty much like giving somebody the keys to your house. But, it does not have to be this way.

This line of argument gives me an uneasy feeling every time I write it down using a modern Linux machine, knowing full well that every single thing I've used apt-get install to put on my computer could be relaying my key presses, because once I install it, it acts as if it were me, whether I want that behavior or not, moment by moment.

The computer is a machine for keeping and seeking secrets.

Is There an Evolutionary Upgrade Path?

I'm not suggesting that we throw out everything that has been done and start again. My suspicion is that, to a very substantial degree and with a concerted effort, ideas from the capability-based systems could be comprehensively re-integrated into Linux. Security-Enhanced Linux uses these terms, but without having the full object capability model available. Post-Snowden, now that we are fully aware of how pervasive advanced persistent threat attacks on our machines are, it should be possible to start reconsidering what we think we know about software and security for the new operating environment in which we find ourselves. But can we work outward from the long-established SELinux project toward those goals?

This is not a straightforward proposition for two reasons: the current limitations of SELinux and the problem of who wrote SELinux.

SELinux currently builds on top of Linux's POSIX capabilities, which are a way of dividing up the power of root into a set of compartments, avoiding the use of setuid. This matters because, in the event of a privilege-escalation bug, the illicitly gained privileges are not the full power of root, but a constrained subset of those powers: notionally, under SELinux, breaking "sudo tail /log/stuff" won't give you access to install new software in the network stack or do any other unrelated thing. You might be able to read what you should not, but you can't write to a damn thing. However, the POSIX capability model in SELinux is (confusingly) not the full-blown object capability model, because it does not allow for delegation and (as far as I can tell from the docs!) applies only to superuser privileges. It comes from a different theoretical base.
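For a sense of what these compartments look like from a programmer's point of view, here is a small sketch using libcap (link with -lcap and run as root for the drop to succeed; the choice of capability is illustrative, and this is my example rather than anything from SELinux itself). A service keeps the single power it needs, binding privileged ports, and discards the rest of root's authority:

    /*
     * Sketch of the POSIX-capabilities compartment idea using libcap.
     */
    #include <stdio.h>
    #include <sys/capability.h>

    int main(void)
    {
        cap_value_t keep[] = { CAP_NET_BIND_SERVICE };
        cap_t caps = cap_init();   /* start from an empty capability set */

        cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET);
        cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET);

        if (cap_set_proc(caps) != 0) {   /* apply the reduced set to ourselves */
            perror("cap_set_proc");
            cap_free(caps);
            return 1;
        }
        cap_free(caps);

        /* From here on, the process can bind low ports, but it no longer
         * holds CAP_DAC_OVERRIDE, CAP_SYS_ADMIN or any other root power. */
        printf("running with CAP_NET_BIND_SERVICE only\n");
        return 0;
    }

The same division of powers is available from the command line: the setcap(8) utility attaches file capabilities to a binary so it never needs to start as root at all.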

In a full-blown object capability system with delegation, like the research operating system lineage of GNOSIS, KeyKOS (used in production systems), EROS, CapROS and Coyotos, a program (let's say a ported version of GIMP) is run and is blind. It can't see the filesystem, the network stack or anything else; it exists in the void. A user opens a filesystem browser and passes a file to the program, and along for the ride goes the necessary set of access keys, passed invisibly by the operating system. These can be implemented as cryptographic tokens, a little like Kerberos, or as an operating-system-level grant of permissions. Now GIMP can see that file. It can pass the token to the operating system like a filename or handle, which then will open and close the file, and so on. Furthermore, when permitted, it can pass that token to another program. Want to run an external filter that exists only as a command-line utility? GIMP can pass the token over to that external utility; the authority to see the file is a transferable asset. And this model extends across computers. A token for, say, Wi-Fi access can be passed from one machine to another as a delegated authority, and authorities can be chained and combined. Something can act on your behalf (it has the token) without being you as far as the software is concerned.
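The nearest everyday Linux analogue to that kind of delegation is passing an open file descriptor to another process over a UNIX-domain socket with SCM_RIGHTS: the receiver gains the authority to use the file without ever learning, or needing, its name. The sketch below is my illustration (the file chosen is arbitrary, and error handling is trimmed for space); it delegates a read-only descriptor from a parent to a child:

    /*
     * Delegation in miniature: hand an open descriptor to another process
     * over a UNIX-domain socket. The receiver never opens the file itself.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int send_fd(int sock, int fd)
    {
        char dummy = '!';
        struct iovec iov = { &dummy, 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        memset(ctrl, 0, sizeof(ctrl));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;              /* "here is an authority" */
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd, sizeof(int));
        return (int)sendmsg(sock, &msg, 0);
    }

    static int recv_fd(int sock)
    {
        char dummy;
        struct iovec iov = { &dummy, 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);
        if (recvmsg(sock, &msg, 0) < 0 || CMSG_FIRSTHDR(&msg) == NULL)
            return -1;
        int fd;
        memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(fd));
        return fd;
    }

    int main(void)
    {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        if (fork() == 0) {
            /* Child: never opens the file, only receives the delegated fd. */
            close(sv[0]);
            int fd = recv_fd(sv[1]);
            char buf[128];
            ssize_t n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("delegated read: %s", buf);
            }
            return 0;
        }

        /* Parent: holds the authority and chooses to pass it along. */
        close(sv[1]);
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
        send_fd(sv[0], fd);
        wait(NULL);
        return 0;
    }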

Say a printer requires network access from one user and a file to print from another. Normally this is a little tricky. You usually wind up with one user e-mailing the file to the other, because the printer expects to work for a single individual: authentication is authorization. In an object capability system, the printer (or device, or program) simply assembles capabilities until it has what it needs to do the job. This completely breaks the model in which people are (all too commonly) handing out passwords, which carry unlimited power, to people they actually want to do just one specific job on a remote machine. The granularity of control is so much finer, and delegation fits our real-world security use cases so much better, than the password-identity model. You may still use a password to log in, but after that, it's delegated capabilities to manage untrusted software (and untrusted people) all the way down. Doesn't that sound like a better way of doing business in our unsafe times?

Now for the second problem: who wrote SELinux?

NSA Security-Enhanced Linux is a set of patches to the Linux kernel and some utilities to incorporate a strong, flexible mandatory access control (MAC) architecture into the major subsystems of the kernel. It provides an enhanced mechanism to enforce the separation of information based on confidentiality and integrity requirements, which allows threats of tampering and bypassing of application security mechanisms to be addressed and enables the confinement of damage that can be caused by malicious or flawed applications. It includes a set of sample security policy configuration files designed to meet common, general-purpose security goals.

The NSA team behind SELinux released it under a FOSS license at the end of 2000. Now we need to ask ourselves: what is it? We have strong reason to suspect from the Snowden documents that long-term attempts to compromise open and academic security work are part of the NSA's mandate—for example, subverting the National Institute of Standards and Technology's cryptography credentialing process by introducing flawed algorithms and getting NIST to sign off on them as credible standards. And, as bitter experience with OpenSSL has shown us (Heartbleed), "with enough eyes, all bugs are shallow" in fact buys us very little security. OpenSSL was extremely underfunded ($2,000 per year!) until the Heartbleed bug brought the world's focus to the plight of its underpaid development team. GPG's development team has been similarly underfunded. This is not working.

So now we have to look at SELinux in much the same light as (sadly) the Tor Project—FOSS security tools funded by deeply untrusted sources with a long history of coercive undermining of security, privacy and user control of their own computers. Do we have enough eyes to be able to trust the code under these circumstances? SELinux is one of only four systems that can provide this kind of mandatory access control under Linux (the others being AppArmor, Smack and Tomoyo), all hooking into the same underlying kernel security framework. Are eyeballs the right metric? Is that enough eyeballs?

These ongoing questions cut to the heart of our security development processes. I hope in the next few years we find a good way of funding the necessary security work that we, and increasingly the entire world, depend on day in and day out.

Enter Capsicum. Capsicum is a fairly serious push to integrate a full implementation of capability-based security deeply into FreeBSD. There is an ongoing effort to bring Capsicum to Linux, and work is continuing. This seems like a sensible and obvious approach to providing users with an appropriate level of security for the post-Snowden environment we now know we operate in. Because any flawed piece of software assumes the full permissions of the user or of the superuser, depending (roughly speaking) on whether it was a user agent like a browser or a dæmon that got compromised, we face a choice: either perfectly tighten every bolt on an entire starship and miss not a single one, or install bulkheads and partition the space into safe areas, so that, if there is a compromise, it is not systemic.
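Capsicum's programming model is small enough to show. The following sketch uses the FreeBSD API, which the Linux port tracks closely; the input file name is hypothetical and this is my illustration rather than project code. The program opens what it needs, clamps the descriptor down to read and seek rights, and then enters capability mode, after which it simply cannot name any new resource in the global namespace:

    /*
     * Minimal Capsicum sketch: limit a descriptor, then close the bulkhead.
     */
    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("input.dat", O_RDONLY);   /* hypothetical input file */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ, CAP_SEEK);
        if (cap_rights_limit(fd, &rights) < 0) {   /* this fd: read and seek only */
            perror("cap_rights_limit");
            return 1;
        }

        if (cap_enter() < 0) {                     /* the bulkhead closes here */
            perror("cap_enter");
            return 1;
        }

        /* Any later attempt to open a path, connect a socket or otherwise
         * name a global resource fails with ECAPMODE; the only authority
         * left is the descriptor we already hold. */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf));
        printf("read %zd bytes inside the sandbox\n", n);
        return 0;
    }

Note the contrast with the SELinux approach: nothing here consults a central policy; the rights travel with the descriptors the program already holds.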

Bolt-tightening approaches to security are certainly necessary, but I cannot see any way to offer users comprehensive privacy and security on devices that act as secure endpoints without capability-based operating system concepts coming to Linux in a big way, and right now, that appears to mean Capsicum is the only game in town. This is a matter of some urgency. Endpoint security weaknesses are really starting to have systemic effects. Let me explain.

I would be much more comfortable if I did not have to trust the thousands of apps on my laptop as much as I do today, and I have a very specific reason for my unease: private key management. Right now I work for Ethereum, a FOSS project producing software to build a globally distributed metacomputer that includes a blockchain database. It's a bit like Bitcoin, but it uses the database to store executable software in the form of "contracts" (little scripts you trust to manage your assets).

I think Ethereum is pretty cool. We expect to see an awful lot of very interesting use cases for the platform, and many people may wind up deeply relying on services built with that software. For example, comprehensive solutions to the increasing mess that is DNS and SSL certificate issuance could come out of a globally distributed database with scripting: register a domain on the blockchain and self-publish your certificates using the same keys you used to pay for the domain name registration. Simple. Namecoin already has given some sense of what is possible, and I have no doubt there is far more to come.

There is more at risk than individual users being compromised and having their contracts spoofed. In a distributed system, there is a monoculture risk. If individual users are being hacked because their laptops slip a version behind bleeding-edge security patches, that's bad enough; we have all heard tales of enormous numbers of bitcoins evaporating into some thief's pockets. But when more than 99% of our users run one of only three major operating systems, consider the risk that a zero-day exploit could be used to compromise the entire network's integrity by attacking the underlying consensus algorithms. If enough computers on the network say 2+2 = 5, the nature of blockchains is that 2+2 not only equals 5, but always will.

Huge disruption to everyday life could result from an error like this if blockchain technology winds up being the solution to the DNS and SSL namespace issues (a conclusion I consider likely and may write up for this journal in the future). We could lose basic connectivity to a large part of the Internet if the consensus protocols were attacked by compromised machines. If a zero-day were used to construct malware that abused or simply published private keys, that too could have disastrous effects, not only for individual users but for the decentralized databases as a whole. If blockchains turn out to be vital to the Internet of Things (IBM has an Ethereum-based project, ADEPT, looking at blockchains and the IoT), then even if the blockchain itself and our software are secure, we have hostages to fortune in the form of the PCs being used to manage the keys and the code on which all of this value and utility depends.

There is an urgent problem here: users are starting to store very real value on their machines, not simply in the form of indirect access to value via banking Web sites, but as direct access to their own private keys and a political role in the consensus algorithms on which the entire blockchain is built. This is all getting a lot more systemic and potentially serious than having somebody read one's e-mail or private journal.

Right now, the weakest link in the ongoing adoption of blockchain technology is operating system security. The fear of being hacked discourages many users from using software on their own machines to do their computation (to use the Stallman model). Instead, third-party Web sites operating theoretically more secure wallets are common—essentially, people are storing their bitcoin and similar assets "in the cloud" because they do not trust their own PCs enough to store value in a decentralized fashion. This negates many of the decentralization benefits of blockchain-based approaches, and it is clearly a massive problem when viewed from Stallman's usual mode of analysis.

Surely at this point in history, it's time for us to return computing to its roots. The computer is a machine for keeping my secrets: my banking details, my cryptocurrency holdings, my private keys controlling software acting on my behalf on the blockchain, in the Internet of Things, in virtual reality, or in any other setting. It's becoming increasingly necessary that users be able to store value on their own machines, and right now, playing whack-a-mole with zero-day exploits is not a good enough security model for this revolution to continue. We have to return to the hard question: how do I stop other people from telling my computer what to do without first asking me?

Encryption without secure endpoints isn't going to help very much, and right now, operating system security is the weakest link. I look forward to your ideas about how we might address these issues in an ongoing fashion—both as a question of awareness raising and funding models, and for the long, hard quest for genuine security for average users. Ordinary people should be able to store value on their home computers without feeling that they have automatically left the front door open with the keys in the lock. How can we provide people with an equivalent level of protection for their bank accounts or their bitcoin holdings? This is the real challenge facing cryptocurrencies, blockchains and even the Internet of Things. If we cannot trust users' devices, how can we give those devices all this access to and power over users' lives?

The revolution is stalling for ordinary users because they cannot trust their operating systems to protect their private keys and thereby their accounts. What now?

Acknowledgements

I'd like to thank a few people for their input: Alan Karp of HP Labs, and Ben Laurie and David Drysdale of Google (and Capsicum).

And thanks to Doc too, for inviting me to do this.

Resources

British War Office: https://en.wikipedia.org/wiki/War_Office

Enigma: http://www.bbc.co.uk/history/topics/enigma

Bletchley Park: http://www.bletchleypark.org.uk/content/hist/worldwartwo/industrialisation.rhtm

Ultra: https://en.wikipedia.org/wiki/Ultra

"How Zero-Day Exploits Are Bought & Sold": http://null-byte.wonderhowto.com/inspiration/zero-day-exploits-are-bought-sold-0159611

Operation Mincemeat: https://en.wikipedia.org/wiki/Operation_Mincemeat

The Man Who Never Was (a film about Operation Mincemeat): https://en.wikipedia.org/wiki/The_Man_Who_Never_Was

"NSA purchased zero-day exploits from French security firm Vupen": http://www.zdnet.com/article/nsa-purchased-zero-day-exploits-from-french-security-firm-vupen

IBM and the Holocaust: https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

Principle of Least Privilege: http://en.wikipedia.org/wiki/Principle_of_least_privilege

"restarting a testing build of squid results in deleting all files in a hard-drive": https://bugzilla.redhat.com/show_bug.cgi?id=1202858

Capability-Based Security: https://en.wikipedia.org/wiki/Capability-based_security

From Objects to Capabilities: Capability Operating Systems: http://erights.org/elib/capability/ode/ode-capabilities.html

Security-Enhanced Linux: https://en.wikipedia.org/wiki/Security-Enhanced_Linux

POSIX Capabilities: https://friedhoff.org/posixfilecaps.html

"Using POSIX capabilities in Linux, part one (avoiding the use of setuid)": http://archlinux.me/brain0/2009/07/28/using-posix-capabilities-in-linux-part-one

EROS (The Extremely Reliable Operating System): http://www.eros-os.org/eros.html

CapROS (The Capability-Based Reliable Operating System): http://www.capros.org

The Coyotos Secure Operating System: http://www.coyotos.org

"Explain Like I'm 5: Kerberos": http://www.roguelynn.com/words/explain-like-im-5-kerberos

Who Wrote SELinux?: https://www.nsa.gov/research/selinux

Patch: https://en.wikipedia.org/wiki/Patch_(computing)

Linux Kernel: https://en.wikipedia.org/wiki/Linux_kernel

Mandatory Access Control (MAC): https://en.wikipedia.org/wiki/Mandatory_access_control

"Tech Titans Launch 'Core Infrastructure Initiative' to Secure Key Open Source Components": http://www.securityweek.com/tech-titans-launch-core-infrastructure-initiative-secure-key-open-source-components

Heartbleed: https://en.wikipedia.org/wiki/Heartbleed

"The Internet Is Being Protected by Two Guys Named Steve": http://www.buzzfeed.com/chrisstokelwalker/the-internet-is-being-protected-by-two-guys-named-st#.earzPzxNAB

"US government increases funding for Tor, giving $1.8m in 2013": http://www.theguardian.com/technology/2014/jul/29/us-government-funding-tor-18m-onion-router

Clipper Chip: https://en.wikipedia.org/wiki/Clipper_chip

Google Transparency Report: https://www.google.com/transparencyreport/userdatarequests/US

"Capsicum: practical capabilities for UNIX": https://lwn.net/Articles/482858

Capsicum for Linux: https://www.cl.cam.ac.uk/research/security/capsicum/linux.html

Linux Kernel with Capsicum Support: https://github.com/google/capsicum-linux

Ethereum: https://ethereum.org

Smart Contract: https://en.wikipedia.org/wiki/Smart_contract

Dapps for Beginners (Ethereum contract tutorials): https://dappsforbeginners.wordpress.com

Namecoin: https://namecoin.info

"A history of bitcoin hacks": http://www.theguardian.com/technology/2014/mar/18/history-of-bitcoin-hacks-alternative-currency

Device democracy—Saving the future of the Internet of Things: http://public.dhe.ibm.com/common/ssi/ecm/en/gbe03620usen/GBE03620USEN.PDF

Endpoint Security: http://searchmidmarketsecurity.techtarget.com/definition/endpoint-security
