The Connectivity Infrastructure
Craig Burton says it's the nature of infrastructure to commoditize itself and that it's the nature of commodities to ubiquitize as well. That's why he believes every company playing the infrastructure game in the computer and networking business needs an open-source strategy. If you want to drive ubiquity, open source is hard to beat. Just look at Linux. (He also points out that driving shareholder value is an orthogonal concern.)
A couple of days ago Kevin Marks pointed to the natural division between transport and protocols. But the more important distinction is between infrastructure (comprised of transport and protocols) and the stuff that depends on it, such as applications, "content" and companies in those businesses.
Toward that end is one of Kevin's links: Bob Frankston's Connectivity: What it is and why it is so important. (Background: Bob co-invented the spreadsheet with Dan Bricklin.) It's a good piece that has many provocative and important things to say. But one paragraph stands out for me as a locus of trouble:
We can now treat telephony and television as applications built upon any available connectivity. We are already used to the idea that we access any Internet service from any provider.
"We" are still a minority, to the advantage of AOL, Microsoft and entertainment industry heavies who want to leverage their application and content mass against lower layers of the infrastructure stack.
Not that they have it easy. Microsoft has successfully leveraged its monopoly position in operating systems to sell lots of mail and Web servers, but the company is still not in a position to change the underlying protocols on which those mail and Web services depend (though they're trying, with .Net). In the case of the entertainment industry, the only leverage they have is legal. They've had some success with the DMCA and similar efforts, but only with controlling behavior, not with the evolution of network infrastructure. And in the case of AOL's instant messaging, there are no lower layers to be concerned about, which is why it's hard to conceive of a server-side instant messaging infrastructure, much less a business to build on it. (All due credit to Jabber, it ain't here yet -- but in time it will be.)
That infrastructure is essentially an underlying condition that's easily conceived as a place. Bob calls the condition "connectivity". Larry Lessig calls it the Net's "end to end" architecture. Craig Burton has my favorite description for the place itself: a hollow sphere in which every point is visible to every other point across an empty space in the middle -- a vacuum where the virtual distances are zero. Fittingly, we conceive of the Net in place-like terms. We have "sites" and "locations" with "addresses" that are "on" the Net.
But here's the problem: what Bob and Larry and Craig talk about is obvious to us, but not to the majority of netizens to whom the Net is a remote place one "visits" by "dialing" there. The place-like nature of the Net is also not obvious to the telecom and cable backbone companies that still think of the whole thing as a distribution system -- a concept they share with the entertainment business (and, regrettably, many lawmakers).
Some broadband providers are no help, either. I was recently pleased to hear that a low-tech friend finally got DSL; but when I visited her home I found that the provider was SBC, which uses a goofy PPPoE client to bypass the PC's networking control panel, and therefore the Net could only be accessed by, of all things, "dialing up." Yes, the speed was faster, but the concept of a remote place one only visits was maintained. The idea that her PC is as connected to the Net as it is to the electric grid remained alien to her.
Of course, using PPPoE to maintain the dial-up model of the Net makes sense for SBC, which is still essentially a telephone company. Here's Bob again:
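For Linux users stuck behind such a provider, the contrast is concrete: a PPPoE link has to be "dialed" as a session that can drop, while an always-on connection just asks the network for a DHCP lease. A rough sketch of the difference, assuming the rp-pppoe package is installed (command names vary by distribution; this is a configuration illustration, not a definitive setup guide):

```shell
# PPPoE: the "dial-up" model. A session must be started, monitored
# and stopped. Credentials are configured first (rp-pppoe's
# adsl-setup prompts for them and writes the config files).
pppoe-start     # bring the session up -- the modern equivalent of dialing
pppoe-status    # is the "call" still connected?
pppoe-stop      # hang up

# Plain DHCP: the "always-on" model. No session, no login --
# the interface simply asks the local network for an address.
dhclient eth0
```

The point isn't the commands themselves; it's that the first model preserves the idea of a remote place you visit, while the second treats connectivity like electricity.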
If we turn from the exciting reality of the Internet back to the other reality of Telecommunications we experience culture shock.
But for those who live within the complex world of telecom regulation day in and day out, the idea of going back to first principles isn't shocking, it's simply inconceivable. It represents a degree of reengineering that is dismissed as naïve and politically unrealistic. Once you accept the premises of the regulatory framework, all of its intricacies seem reasonable and necessary.
But not to every corporate creature that inhabits the regulatory jungle. The big ISP where I live is Cox Communications, which is the opposite of SBC in its attitude about customer connectivity. Right now Cox is rolling hundreds of thousands of cable-connected customers over to a new backbone with new servers that require new domain names for e-mail addresses and web site URLs. And it is actually simplifying connectivity by eliminating required DHCP client IDs, so customers can hook up with straight DHCP. The tech support guys even seem to enjoy hearing that our connection gets split to seven or more computers through a router, two Ethernet hubs and two 802.11b wireless base stations. They suspect I might be expert enough to help other customers. Which I do.
But this week I spent almost four days off the air after Cox threw the switch to the new system (from Excite@Home, which was bankrupt and going out of business). Why? Because Cox sent out fancy conversion kits that failed to explain in plain terms that the Net was now actually easier to access, even though everybody who used a Cox e-mail address or had a Cox-hosted web site would need to make some adjustments. The kit spent most of its glossy energies on those issues rather than simple connectivity.
Worse, the kits failed to explain that in many cases the cable modem would need to be turned off for up to several hours, so that its semi-volatile memory would forget the old settings while the customer's account was being "reprovisioned."
Worst of all, the kit came with a CD that installs new copies of Microsoft Internet Explorer and Outlook Express (on PCs and Macs -- forget about Linux) and explains that e-mail addresses and web sites can only be changed if the user runs special scripts initiated by unique codes that come with the kit and can only be implemented using IE5 on Windows.
The net effect was to confound every customer, regardless of platform or technical competence. It also did nothing to improve popular understanding of what the Net is all about. A major lost opportunity.
I still believe that conceiving the Net as a distribution system will ultimately fail, regardless of how much legal and market leverage the AOLs, SBCs and RIAAs of the world exert to maintain it. In the long run the concept will simply become irrelevant.
But we can hasten its demise by explaining, as often and as forcefully as we can, the difference between terminal distribution-oriented business models and ubiquitous commodity infrastructures that are good for every kind of business. Or, as Bob puts it:
Once we see that connectivity is the basic resource and that telephony and television are simply applications built on connectivity we can seize the opportunity to replace complex regulation with the power of the marketplace.
Doc Searls is Senior Editor of Linux Journal.