Linux for Suits - Migrating a Mentality
The Internet will do for the 21st century what roads did for the 20th century and railroads did for the 19th century. That we need to build out the Net, to the maximum possible capacity, everywhere we can, is beyond question. That economic and cultural benefits will increase with connectivity and capacity is also beyond question. What's not beyond question is who should do it, how, where and by when.
In Korea, Japan, Denmark, the Netherlands and other countries, there is widespread public and private commitment to build out Net connectivity to as many people and places as possible, with as much capacity as possible. Means differ, but the goals are the same. Net build-out is a top priority.
Meanwhile, here in the US, Net build-out has been left up to cable TV and telephone companies that have not only squandered opportunities (according to TeleTruth, carriers have pocketed $200 billion in federal subsidies for fiber build-outs that never happened), but also have conflicted interests in the matter. Here's how home networking pioneer (and co-inventor of the spreadsheet) Bob Frankston puts it:
For those worried about competition, it would be hard to do worse than a system in which there is a fundamental conflict of interest. Today's transport providers have a very strong incentive, even a requirement, to maintain scarcity—especially when burdened with costs that do not increase the value of their product.
This is why fiber deployments like Verizon's FiOS are really about delivering high-definition television (and competing with cable TV companies), rather than delivering Internet capacity. Bob continues:
The fiber they are installing for FiOS is really a cable TV plant disguised as a network. It is a Passive Optical Network (PON) designed as a distribution system from a head end to the terminals at each home, though it does have capacity to send data back. A single fiber has the capacity for gigabits of traffic. There's so much capacity that they can simply allocate a portion of it to emulating traditional cable TV. The 15Mbps they reserve for their Internet service is less than 1% of that capacity!
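Frankston's percentage is easy to check with back-of-the-envelope arithmetic. The sketch below assumes a PON link rated at roughly 2.4Gbps downstream; that figure is an assumption typical of GPON deployments, not a number from the quote itself:

```python
# Rough check of the capacity claim above.
# Assumption: the PON carries roughly 2.4 Gbps downstream;
# actual capacity varies by deployment and equipment generation.
pon_capacity_bps = 2.4e9   # ~2.4 Gbps of shared downstream capacity
internet_tier_bps = 15e6   # the 15Mbps reserved for Internet service

share = internet_tier_bps / pon_capacity_bps
print(f"Internet tier: {share:.2%} of the fiber's capacity")
```

On those assumptions, the Internet tier works out to roughly 0.6% of the link, comfortably under the 1% Frankston cites.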
He adds, “Direct and transparent funding is vital, but unlike the current regulated system, we do not have to grant the transport providers any exclusive rights—we can all add capacity.”
This is the key point. Adding infrastructural capacity for Internet isn't as hard or complex as building roads, bridges, dams, waste treatment facilities, railroad lines or power plants with large towers marching across the landscape. It's mostly a matter of planting conduit and fiber-optic cabling in the ground, or hanging cabling from poles that are already there—then deploying wireless coverage with fiber “backhaul”.
Bob Frankston's preference is for individuals and communities to build their own DIY (do-it-yourself) “plant” and connect in their own ways with each other, bypassing the cableco/telco duopoly and the “regulatorium” (his word, and it's an excellent one) that governs it. Local DIY networking is exactly the business of Indienet.dk in Copenhagen, which I wrote about last month. Not surprisingly, the Organisation for Economic Co-operation and Development (OECD) lists Denmark as the top country in broadband penetration and growth. The US is 12th in penetration and 17th in growth.
Here in the US, citizens are opting to use local governments for DIY Net build-out. The results are “muni” projects by cities and counties—hundreds, so far, across the country. In New Mexico, Sandoval County—home to seven Native American pueblos, a 33% Hispanic population and Intel's largest fabrication plant—is spending around $8 million (a remarkably low number) on a wireless build-out that intends to deliver gigabit-level connectivity to everybody in a region the size of Connecticut, yet notoriously lacking in amenities. In Utah, UTOPIA is a fiber build-out by 14 cities that wholesale capacity to retail service providers. One of those, ironically, is AT&T. In Vermont, Burlington Telecom is a city department currently building out a “triple play” (Internet, phone, television) retail offering.
Each project is unique, but all have two things in common: 1) they're doing what the carriers won't, and 2) they're doing it for every citizen, organization and business—and not for one company or one application.
Naturally, the carriers oppose the munis. They say these local governments are competing with business (which is highly ironic, given that the carriers have lived under government-maintained regulatory protection for the duration). So the carriers have been lobbying for anti-muni legislation at the state and federal levels. One of their successes is the Local Government Fair Competition Act in Louisiana, which was passed at the behest of the carriers to “level the playing field” between them and the munis. The law has had the effect, so far, of halting deployment of a fiber-based muni system in Lafayette that originated with voters.
Everywhere you look, the carriers are at odds with their own customers. Last November, 72% of the voters in Clarksville, Tennessee, approved the city's Department of Electricity's bid to build out a fiber-based network.
The arguments are not going to get any less heated, especially with a new US Congress that features a Democratic Party majority. In the last Congress, Net Neutrality legislative efforts, led by Democrats, were defeated by Republican majorities. Pro-Neutrality advocates will be looking for new legislation to be introduced. And, you can bet the carriers will fight that legislation by stepping up PR as well as lobbying efforts.
We can get past those arguments and simplify matters by answering one deep and simple question: is the Net public or private infrastructure? The munis say public. The carriers say private. To help find the answer, here is a list of familiar infrastructures, sorted into those that are public and those that mix public and private.
Public:
Water (wells, reservoirs, distribution systems, dikes and levees).
Streets, roads, highways and bridges.
Waste water treatment.
Garbage disposal (mostly landfills).
Mixed public and private:
Garbage collection and recycling.
Electric power generation and distribution.
We can argue about what belongs on the list and what doesn't. But what's clear is that we need public infrastructures to support civilization.
Public infrastructure is manufactured nature. Reservoirs are man-made lakes. Irrigation canals are man-made streams. Waste treatment systems are man-made swamps. Roads and bridges are man-made geology. Power-generating plants are man-made systems for converting or extracting energy from nature. At their best, public infrastructures work as part of nature. Water capture, distribution and waste treatment should work inside the hydrologic cycle. Roads and bridges should conform to the supportive shapes and materials that make up the world's lands and waters.
You can make money with public infrastructure, but that's not infrastructure's main purpose. What you want is to make money because of infrastructure. Roads, water and waste treatment are all built primarily to support economies other than their own, if they even have any. Even our electric and gas utilities are not in business to support only themselves. They are in business because the rest of civilization can't get along without them. Public infrastructures are so quietly supportive to civilization that most of us give no more thought to them than we give to gravity or sunlight.
The Net is quietly supportive too. It doesn't advertise itself. It only connects devices and carries bits. It reduces to zero the distance between any two devices, or any two individuals. What we get billed for by phone and cable companies is access to the Net—not the Net itself.
I would argue that the Net is the most public infrastructure we've ever built, because it's the first to build on human nature. To illustrate this, Figure 1 is a diagram of civilization, borrowed from the Long Now Foundation.
I've shown this before, but I think it's important to show it again: each layer supports the one above it, allowing the higher layer to move faster.
Let's look at the case of Linux, which grew out of the need to develop tools and building materials that are useful to everybody, rather than to just one company. This universality of purpose is what makes Linux infrastructural. The natural way Linux (and other open-source tools and building materials) grows also resembles that of a species. Here's how I explained this in a report last year:
Kernel development is not about Moore's Law. It's about natural selection, which is reactive, not proactive. Every patch to the kernel is adaptive, responding to changes in the environment as well as to internal imperatives toward general improvements on what the species is and does.
The species-like nature of FOSS (free and open-source software) is organized by community development culture, which gives rise to self-governance within communities—along with licensing that makes infrastructural choices as solid and useful as possible to commerce, to markets, to entire economies. Thus, infrastructure arises out of, and builds upon, the best of human nature.
All this was clearly evident last November, when I walked around the exhibition hall at ISPCON. Dozens of infrastructure deployment businesses (mostly selling local and regional wireless Internet equipment) built their systems on Linux. When some of the folks at the booths saw Linux Journal on my badge, they wanted to tell me how they put Linux to use. In other cases, I had to ask. Usually the answer was “Oh, sure.” It was like asking if they wore clothes. The answer was that obvious.
Linux became ubiquitous because experts put it to use. Experts discovered the benefits quickly, and expertise around Linux eventually became a premium skill set. Jakob Frederiksen of Indienet.dk told me that Linux talent was cheap five years ago, but expensive today. (This is one more example of making money because of Linux rather than just with it.)
“All the significant trends start with technologists”, Marc Andreessen told me 11 years ago (when Netscape open-sourced Mozilla). He also said, “Technologists are driving progress, and it's easier to drive with Linux than with anything else.”
There is a lag between what technologists do first and what the rest of us do later, especially when what technologists do is not strictly commercial, yet is deeply supportive of commercial activity. The way nature, culture, governance and infrastructure all support commerce is not apparent at the commercial level. Nor is the way commerce contributes back to infrastructure. Yet we can be sure that the experience of many Internet infrastructure builders in the world will contribute useful code to Linux and many other infrastructural building materials and tools.
Meanwhile, most business experts still don't grok the infrastructural nature of the Net, even though they put it to use every day. Like most of the rest of us, they're still stuck in the Net's equivalent of the 1880s, when electric power was just beginning to replace gas, and most people understood electricity in terms of its primary use, which was light. Even today, many electric utilities still carry the surname “Power & Light”. DC vs. AC was the Cable vs. Telco of its day.
In the long run, we learned to separate power from light—or, in modern parlance, transport from applications. As Bob Frankston puts it, “Edison originally sold light, but we now buy electricity and create our own lighting.” Today the equivalent of “light” for most of us is a combination of e-mail and Web browsing. A guy selling business-grade Internet service for our local cable company (Cox Communications) told me recently that most new business Internet customers use the Net to connect retail point-of-sale devices, and to do a combination of e-mail and browsing in their offices. They haven't discovered the full potential of high-speed symmetrical Internet service.
Of course, the carriers have hardly given any of us the chance. They have ignored the fact that the Net was designed in the first place as a symmetrical system, with equally fast and unencumbered upstream and downstream connections and speeds. As a result, almost none of us with a home or low-end business connection has ever experienced symmetrical service. The carriers optimized their systems from the beginning to anticipate and support consumption, not production. Moreover, business customers were charged a premium, just like they've always been charged premiums for “business” telephone and cable TV service.
Now let's talk about cost.
Fiber isn't free, but the cabling itself generally costs less than the labor of trenching it underground or hanging it from poles, and it's getting cheaper every day. More important, each strand can carry gigabits of data. The “first cost” of the Net, once fiber is installed, is blinking light. Routers, amplifiers and other infrastructural gear cost money to buy and to run, but the costs of the connections themselves are basically zero. And fiber cabling doesn't deteriorate with use, because there is no physical difference between fiber that's “dark” and fiber that's “lit”. Light does dim over distance, but it doesn't encounter the degrading resistance that electrons meet as they pass through copper wiring. Fiber-optic signals also emit no side radiation along the cabling. So it's about as “green” as a technology can get.
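The difference between dimming and resistance can be made concrete. Optical loss accumulates in decibels per kilometer; the sketch below assumes a typical single-mode attenuation of about 0.35dB/km, which is my illustrative assumption, as real cables vary with wavelength and quality:

```python
# Sketch of optical attenuation: loss accumulates in decibels,
# not as heat-producing electrical resistance.
# 0.35 dB/km is an assumed, typical figure for single-mode fiber
# around 1310 nm; real cables vary.
def power_remaining(distance_km, attenuation_db_per_km=0.35):
    """Fraction of launched optical power left after distance_km of fiber."""
    loss_db = distance_km * attenuation_db_per_km
    return 10 ** (-loss_db / 10)

for km in (1, 10, 20):
    print(f"{km:>2} km: {power_remaining(km):.1%} of launch power remains")
```

Even after 20km, about a fifth of the launched light remains, which is part of why long fiber runs need so little active gear compared to copper.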
Wireless deployments are cheaper than fiber (no need to trench or hang cabling), and are capable of spanning distances where fiber deployments are impractical or impossible (such as across canyons of the Southwest US). But in both cases, the investments are highly durable, far less costly than most highway, water and waste treatment projects, and hugely supportive of countless activities, and markets of every sort.
The top price for FTTx (fiber to the whatever) that I've heard so far is about $2,500 per “drop”. This is about the same price you'll pay for a big flat TV screen that will be obsolete in three years, if not less. Meanwhile, FTTx will only improve in value.
What about funding? Bob Frankston says, “Financing fungible connectivity in the same way you might finance macadam makes sense. Financing streets based on being able to stop cars and demand protection money is very different.”
The problem with “triple play” for munis is that it puts them in direct competition with their local telephone and cable companies. Worse, it makes communities come up with a commercial “revenue model” for public infrastructure. We don't burden roads and water systems with revenue models that do anything more than cover the expenses of maintaining them. Why should we place that burden on the Net?
Because the only models we know are provided by phone and cable companies. Also because we want to pay off debts, and “triple play” seems like a good way. Unfortunately, by emulating the carriers we adopt not only their business models but also their mentality. “Triple play” sees only three ways of making money with the Net, rather than limitless ways of making money because of the Net. By building out the Net, we're creating an ocean of connectivity, with frontage for everybody. The ocean's job is to support every kind of use, every kind of traffic, every application, every business, equally.
Perhaps the best model for munis is the municipal electric utility. Jim Baller, one of the top lawyers specializing in muni build-outs, writes:
More than 2,000 municipal electric utilities have thrived over the last century, contributing greatly to the well-being of their communities and America as a whole. Another 1,000 communities established their own electric utilities and sold them to the private sector, having achieved their goal of avoiding being left behind in obtaining the benefits of electricity. In contrast to these 3,000 successful municipalities, thousands of communities that waited for the private sector to get around to them stagnated or became ghost towns.
We are at the same crossroads today—except that only one road is built, and we need to build the other road across it.
Doc Searls is Senior Editor of Linux Journal. He is also a Visiting Scholar at the University of California at Santa Barbara and a Fellow with the Berkman Center for Internet and Society at Harvard University.