A Machine for Keeping Secrets?

[I can't begin to describe all the things Vinay Gupta does. Fortunately, he does, at http://re.silience.com. There his leadership in many involvements is on display, and you can treat yourself to many hours of productive reading, listening and viewing—many involving breeds of Linux. After getting a little hang time with Vinay in London recently, I invited him to treat us to a guest EOF on any topic of his choice. He took the bait, and here it is.—Doc Searls]

The Lesson of Ultra and Mincemeat

The most important thing that the British War Office learned about cryptography was how to keep a secret: Enigma was broken at Bletchley Park early enough in World War II to change the course of the war—and of history. Now here's the thing: only if the breakthrough (called Ultra, which gives you a sense of its importance) stayed secret could Enigma's compromise be used to defeat the Nazis. Breaking Enigma was literally the "zero-day" that brought down an empire. A zero-day is a bug known only to an attacker. Defenders (those creating and protecting the software) have never seen the exploit and are, therefore, largely powerless to respond until they have done their analysis. The longer the zero-day is kept secret, and its use undiscovered, the longer it represents absolute power.

Like any modern zero-day sold on the black market, the Enigma compromise had value only if it remained secret. The stakes were higher, but the basic template of the game—secret compromise, secret exploitation, doom on discovery—continues to be one basic form of the computer security game to this day. The allies went to extraordinary lengths to conceal their compromise of the Enigma, including traps like Operation Mincemeat (planting false papers on a corpse masquerading as a drowned British military officer). The Snowden revelations and other work have revealed the degree to which this game continues, with many millions of taxpayer dollars being spent keeping illicit access to software compromises available to the NSA, GCHQ and all the rest. The first rule is not to reveal success in breaking your enemy's security by careless action; the compromise efforts that Snowden revealed had, after all, been running for many years before the public became aware of them.

Who Does Software Serve?

I would like to posit a fundamental problem in our attitude toward computer security. For a long time, we have basically assumed that computers are tools much like any other. Pocket calculators and supercomputer clusters all share the same von Neumann architecture (another artifact of WWII). But the truth is that the computer has also been, from its very first real implementation, a machine for keeping and seeking secrets. This history applies not just to the Enigma machines that the British subverted to help defeat the Nazis, but also to IBM's Hollerith tabulators, used by the Nazis to identify Jews from census databases.

This is why the general utility model of computing we now use is notoriously difficult to secure. At a conceptual level, all programs are assumed to be direct representatives of the user (or superuser). This is fundamentally a mistake, a conceptual error that cannot be repaired by any number of additional layers piled on top of the fundamental error: software serves its authors, not its users. Richard M. Stallman, of course, understands this clearly but focuses mainly on freeing the source code, giving technical users control of their software. But beyond the now-rusty saw of "with enough eyes, all bugs are shallow", the security community as a whole has not gone back to basics and assigned the intentionality of software correctly: to its authors, rather than to its users. Once we admit that software works for those who wrote it, rather than the hapless ones running it, many of the problems of managing computer security get much clearer, if not easier! Furthermore, there is always the gremlin: discordia manifested as bugs. Software behaviors that no human intended are not only common, but ubiquitous. In these cases, software serves neither the user nor the author, but silently adds to the entropy of the universe all by itself.

Imagine if all the people who wrote the software you use every day were made visible. If you run a fully free computer, right down to the BIOS, you would generally expect to see a group of people who are fully on your side. But then there is the router, and the firmware in your mouse and your telephone's baseband processor, and indeed the epic maze of software that powers the electrical grid to which your devices must connect, and so on. In truth, we do not like or trust many of the people writing the software on which our lives depend in so many ways. The fact that in the 21st century we still download and run programs that have arbitrary access to all of our personal files and data, and often deep access to our operating systems, is frankly madness. I'm not discussing sandboxing or virtual environments—these may be answers, but let us first clearly state the question: who does this machine serve?

The machine serves the authors of the software, not the person choosing to run it. If you have recently handed over permissions you were not entirely happy with while installing software on an Android phone, you have felt a sense of "No, I do not want you to do that—that's your desire, not mine!" Often we do not entirely trust those authors, their software or the hardware on which it runs. We literally cannot trust our possessions. Nobody wants to carry a snitch in their pocket, and yet we all do.

In an ideal world, all of our systems (and perhaps not only technological ones) would obey the Principle of Least Privilege. Rather than granting large, abstract powers to code (or other systems) and trusting there to be no bugs, we could grant powers in a narrower way. Consider the all-too-typical "programs can see the entire filesystem" permission we grant to nearly all software dæmons: when something goes wrong, it results in disasters like Squid deleting your root filesystem when restarting. Why does Squid need the power to do that? Why even hold those keys?
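To make the principle concrete, here is a minimal Python sketch of the narrower grant (the dæmon name and cache path are hypothetical, not taken from Squid): the cleanup routine is handed an open directory descriptor rather than the run of the filesystem, so the worst a bug in it can do is empty that one directory.

import os
import stat

def clear_cache(cache_dir_fd):
    # The routine's only authority is the descriptor it was handed; it
    # never sees a path string that a bug could mangle into "/".
    for entry in os.listdir(cache_dir_fd):
        st = os.stat(entry, dir_fd=cache_dir_fd, follow_symlinks=False)
        if stat.S_ISREG(st.st_mode):
            os.unlink(entry, dir_fd=cache_dir_fd)

# The caller, not the dæmon, decides which directory is at stake.
fd = os.open("/var/cache/example-daemon", os.O_RDONLY | os.O_DIRECTORY)
try:
    clear_cache(fd)
finally:
    os.close(fd)

Of course, on a conventional Linux system nothing stops the process from ignoring the handle and opening paths anyway; a capability-based operating system is what makes the narrow grant the only grant.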

So What Happens If We Choose Not to Trust Everybody?

There was a path not taken: capability-based operating systems. Capability-based operating systems really are machines for keeping secrets. They assume that all code is written by people we do not trust, and that the code may contain damaging bugs, if not outright assaults. "All code is untrusted code" creates a completely different role for the operating system in protecting users from the tools they themselves have downloaded. This is a realistic model of what software is like, an explicit model of distrust, unlike the vague trust we feel when installing software many other people are using, thinking "with enough eyes all bugs are shallow, so I'm sure this will be fine." That's not a great model of trust! Capability-based systems assume that all code may be evil, even code the user writes (bugs!), so it is, by default, untrusted in the most profound way.

A bare program can do nothing—no network, no filesystem access, nothing until it is granted permissions, and the operating system provides a smooth interface for an extremely granular approach to granting and managing these permissions. This is not like the Android model, where the application has access to high-level constructs like "your address book"; rather, this extends all the way from that level down to a low-level file-by-file access control model.

In an object capability model, a program cannot open a directory or search for files without a go-ahead from a user, although usually that go-ahead is implicit. For example, passing an open file handle as a command-line argument would grant the relevant program access to that file. A shell could manage those open file handles for the user, opening files and passing their handles in a way that is seamless and transparent to the user. Without that permission, all attempts to access a file will simply be met with failure; as far as the software is concerned, that resource simply does not exist.
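Here is a rough Python sketch of that handle-passing pattern, playing both roles in one file (the file name notes.txt is just an illustration): the "shell" side opens the file and passes only the descriptor; the program side counts words in whatever it was handed, without ever seeing a filename. On ordinary Linux this only demonstrates the mechanism; it takes a capability-based OS to guarantee that the program can reach nothing else.

import os
import subprocess
import sys

if len(sys.argv) > 1:
    # Program side: no filename appears here, only the granted handle.
    fd = int(sys.argv[1])
    with os.fdopen(fd, "r") as granted_file:
        print(len(granted_file.read().split()), "words")
else:
    # "Shell" side: open the file on the user's behalf, then pass only
    # the open descriptor to the program as its command-line argument.
    fd = os.open("notes.txt", os.O_RDONLY)
    subprocess.run([sys.executable, __file__, str(fd)],
                   pass_fds=(fd,))  # keep the descriptor open in the child
    os.close(fd)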

To get to this security position, one has to be very clear about the politics of software. Why was this code written? Who does it serve? Toward whose advantage does it work? Cui bono? Even if the only illicit advantage is a bug or two serving nothing but the increase of entropy in the universe, we must admit that, when we get right down to it, if you did not write the software yourself, it's pretty much like giving somebody the keys to your house. But it does not have to be this way.

This line of argument gives me an uneasy feeling every time I write it down using a modern Linux machine, knowing full well that every single thing I've used apt-get install to put on my computer could be relaying my key presses, because once I install it, it acts as if it were me, whether I want that behavior or not, moment by moment.

The computer is a machine for keeping and seeking secrets.

______________________

Vinay Gupta is the release coordinator for Ethereum, a FOSS scriptable blockchain platform.