A Pain in the Person
At what point will we say "enough"?
To illustrate what a negative externality is, it helps to get literal about it. So let's start with horse shit—specifically, the Great Horse Manure Crisis of 1894. That was the year a Times of London reporter guessed that the city's streets would be buried under nine feet of manure within 50 years. In the Pulitzer-winning Gotham: A History of New York City to 1898 (Oxford: 1998), Edwin G. Burrows and Mike Wallace say horses deposited 2.5 million pounds of manure and 60,000 gallons of urine on 250 miles of city streets, every day. By 1900, possibly the peak year for horse-drawn transport, New York was served by 100,000 horses producing 1,200 metric tons of manure, daily. About half that much was collected daily and hauled, by horse, to Barren Island, off the coast of Brooklyn. The rest accumulated, bred flies, smelled to heaven and spread disease—all negative externalities. Add to those the costs of breeding, raising and feeding horses with hay grown on farms in nearby countryside and hauled—also by horse—to stables in cities.
But the positive internalities of the horse-drawn system outweighed the negative internal and external ones. Civilized city life required horse-drawn transport, so citizens put up with the bad stuff, as they always do. And, there were positive externalities as well. Cruelty to horses drove Henry Bergh to found the ASPCA in 1866. Horse poop and landfill expanded Barren Island until it eventually became part of Brooklyn. In 1930, it was paved with tarmac and runways to become New York's first major airport: Floyd Bennett Field. Today, it's a park by the same name.
But, the most positive outcome of horse-drawn transport was that it legitimized gas-powered mechanical transportation: cars and trucks. These too would produce negative externalities along with many positive ones. But let's not go there, because we know them all well. Instead, let's look at negative externalities we put up with today in the digital realm, starting with advertising.
Traditional advertising—the kind that runs on TV, radio and in print—wastes no more time, space and electricity than it takes to generate it. Although it does waste substantial amounts of all three, those all have physical limitations. The same is not true of advertising on-line, where virtual space is virtually infinite, and pollution by wasted messages is entirely ignored by those who create it. All they care about are "exposures" and "click-throughs". If an ad doesn't get read, it doesn't matter to the producers, because the costs of the waste are mostly external: borne by others. Last December, Fred Wilson gave a speech in which he fingered "data leakage" and "pollution" as a Major Issue (starting around 23 minutes in). This could be a turning point (Fred's an influential guy). Or, it could just be a sign of the times that we'll ignore for another few years or decades.
We also suffer many negative externalities from the login/password convention, which is as stale today as horse-drawn carriages were in 1910. If you're Example.com, all you care about are the logins and passwords you require of your users—not the dozens or hundreds of logins and passwords the user has to remember, somehow. Right now, I'm in the early stages of changing many hundreds of logins and passwords on up to four different browsers, on several different computers, plus those on my phones and tablets. This is a huge project, slowed by a de-motivating sense of futility, plus resentment that we haven't come up with something better. True, a variety of password managers are available to me, and I'm busy kicking their tires as well, but each of those brings its own set of vulnerabilities, chief among which is dependency itself: I become their vassal too.
I see the larger problem as centralization: a box so huge that we can hardly think outside of it, much less develop solutions out there. When Target Stores got hacked, and more than 110 million credit cards needed to be replaced, almost nobody (as far as I know) looked at the sum costs of the security breach to the individual credit-card holders, much less at the need to come up with alternative approaches that would present bad guys with smaller surfaces to attack. Instead, all we got was hand-wringing and promises by feudal lords and their suppliers to build better castles, most of which consist of silo'd "loyalty" programs and other coercive systems for keeping their serfs—customers and users—trapped inside. Even every app, it seems, is a little castle of its own, and there are now more than a billion of those, for both iOS and Android.
The Internet was designed to solve this problem, starting in 1962. That was when Paul Baran came up with a network model designed to avoid the vulnerabilities inherent in the only kind of networks anybody knew at the time. Those were centralized ones, such as we got from phone companies and TV networks. Baran's new model was what he called distributed, and he illustrated it with the graphic shown in Figure 1.
Figure 1. Centralized, Decentralized and Distributed Networks
Every node on a distributed network would be independent. And, although a node or a link might be vulnerable, it would not bring down the whole network if it failed. Such a network would be heterarchical, a virtue I unpacked here in April 2014. The Internet we have today is actually both decentralized and distributed, but at least it gives us a platform for creating distributed solutions to the problems of centralization. Linus and thousands of collaborators have used that platform to create and continuously improve Linux for 24 years, all operating independently. Yet, we also find Linux inside nearly every big centralized system on the Net—Twitter, for example. Because Twitter is centralized, it's easy for a government to shut it down. That's what happened in Turkey, back in March. But, that's beside the main point I want to make here, which is that distributed networks are composed of the same individuals that centralized systems burden with negative externalities, which those systems either rationalize or ignore.
In May 2014, the top European court ruled in favor of an individual's "right to be forgotten", and against Google, which produces an infinitude of search results, including many that some people would rather have the world forget. While the implications of the ruling were hotly debated, lost amid the debate was the fact that Google's manners toward individuals often have been terrible. One example is StreetView, which provides the world with pictures of everybody's homes and businesses. This freaked out a lot of people, and in some cases, whole countries. Another is Google Glass, which has many fine uses, but also suggests that the wearer is recording others without permission. Google's lame manners in those cases are made possible by centralized systems that ignore or rationalize negative consequences to individuals.
At some point, outsourcing negative externalities to individuals is going to become a burden too high for those individuals to bear, and a tipping point will be reached. When that happens, we'll start to see some forward motion toward creating the distributed individual-first solutions that Paul Baran first drew for us more than a half century ago.
Doc Searls is Senior Editor of Linux Journal
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
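That string-them-together pipeline can be sketched in a few lines of shell. The directory, filename pattern and search term below are illustrative stand-ins (a real search would start at /home and look for whatever entry you're after); the sandbox setup just makes the sketch self-contained:

```shell
# Build a small sandbox of sample files (illustrative; in the article's
# example the starting directory would be /home).
dir=$(mktemp -d)
echo "ERROR: disk full"    > "$dir/app.log"
echo "nothing to see here" > "$dir/notes.txt"

# find locates every .log file under the directory; grep then searches
# each one, printing the filename (-H) beside each matching line.
find "$dir" -type f -name '*.log' -exec grep -H 'ERROR' {} +

# Clean up the sandbox.
rm -rf "$dir"
```

Note the `-exec … {} +` form, which hands find's results to grep in batches rather than spawning one grep per file—the same composability, just more efficient.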
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
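For context, cron's whole interface is a table of five time fields plus a command. A hypothetical crontab entry (the script path and times are made up for illustration) looks like this:

```shell
# Edit with `crontab -e`. Fields: minute hour day-of-month month day-of-week.
# Rotate application logs every night at 2:30 a.m., capturing output:
30 2 * * *  /usr/local/bin/rotate-logs.sh >> /var/log/rotate.log 2>&1
```

That simplicity is cron's strength, and also why questions about dependencies, retries and cross-machine scheduling eventually push shops beyond it.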
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide