EOF - Net Development
It's important to remember that the Web began as a project. As Tim Berners-Lee explained (in an August 1991 post to alt.hypertext, groups.google.com/group/alt.hypertext/msg/395f282a67a1916c), “The WWW project merges the techniques of information retrieval and hypertext to make an easy but powerful global information system.”
Nearly two decades later, the Web has done exactly that—and much more. While still the province of search engines and browsers, it also includes a collection of utilities we call the cloud, backed by massive storage and compute capacities residing in the racks of Amazon and Google. That's in addition to countless Web services, applications and other graces of development work (such as we cover in the preceding pages of this month's Linux Journal).
Yet no matter how large and encompassing the Web becomes, the Net remains the broader platform, the more encompassing environment. Everywhere society has digital foundations, the Net is there to make the connections. Today those connections span the whole world. So why don't we hear more about Net Development?
Although there are plenty of Internet protocols and applications outside the Web (IM, file syncing and sharing, and e-mail all jump to mind), we tend not to think of the Net as a platform. Perhaps that's because the Net's protocol suite is about transport rather than presentation or application. It doesn't care what datalinks (Ethernet, DSL, WDM, MoCA) or what physical or wireless media (copper, fiber, Wi-Fi, 3G, WiMAX) are used. It just makes a best effort over what's available.
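The point about the Net's protocol suite being about transport rather than application can be seen in a few lines of code. Below is a minimal sketch (in Python, using only the standard library) of an ad-hoc echo service over TCP: the transport layer carries these bytes exactly as indifferently as it carries HTTP, IM or e-mail, with no knowledge of what application rides on top. The function names and port-assignment scheme here are illustrative, not from any particular protocol.

```python
import socket
import threading

def echo_server(host="127.0.0.1"):
    """Tiny one-shot TCP echo service. The transport neither knows nor
    cares what application data it carries -- HTTP, IM, or this ad-hoc
    protocol -- it just moves bytes on a best-effort basis."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))  # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)  # echo the bytes back unchanged
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]  # the OS-assigned port number

def echo_client(port, message):
    """Connect to the echo service and return whatever comes back."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message)
        return c.recv(1024)
```

For example, `echo_client(echo_server(), b"hello, net")` returns `b"hello, net"`. Nothing in the sockets API above mentions Ethernet, DSL, fiber or Wi-Fi; that indifference to datalink and physical media is exactly what makes the Net a platform rather than a service.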
And, that's the gating factor: what's available.
Today, most of us get on the Net through a phone or cable company that sells Net access as the third act in what they call a triple play. The first two acts are telephony and television. The Comcast Triple Play, for example, is pitched as “The best in TV, phone and Internet—three great services. One low bill. Hey, life just got a little easier.” This positions the Net as just another “service”, on par with television and telephony. Never mind that the Net can encompass both.
And they don't give us the whole Net. They cripple it with asymmetrical provisioning (even fiber deals default to higher downstream than upstream bitrates), blocked ports and lack of fixed IP addresses. If we want more, we have to move up to a “business” tier that begins with lower data rates and much higher prices—a shakedown racket that persists from the days when Ma Bell and national PTTs ruled the Earth.
The Net most of us know best is one where the Web is a wide-open platform for development, while the Net it runs on is “delivered” as a data spigot. Back when the carriers first realized they were now ISPs, the Internet service they thought they'd be providing was biased by what they knew best and expected people would want: entertainment on the TV model. That usage materialized, but so did countless others. The carriers continue to miss a lesson of Web development that has thrived in spite of their asymmetrical biases: open platforms without commercial biases support an infinitude of businesses. The Web is generative, as Jonathan Zittrain puts it in The Future of the Internet and How to Stop It (for more, see “A Tale of Two Futures”, the EOF from July 2008). The carriers don't yet see how selling the Net as just one (crippled) “play” forecloses an infinitude of other plays.
But the tide is starting to turn. In November 2008, I attended a “brainstorm” conference in London, put on by the Telco 2.0 Initiative, the mission of which is to “catalyze change in the Telecoms-Media-Technology sector”. Every speaker and panelist inveighed, one way or another, against “triple play” and every other doomed-monopoly business model. Instead, they expanded on this advice in the Telco 2.0 Manifesto (www.telco2.net/manifesto):
New value lies in addressing the friction that exists in everyday interactions between businesses and consumers, and governments and citizens. Typical examples include: authenticating users, market research, targeting promotions, distributing goods and content, collecting payments and providing customer care....
Telcos collectively have assets that can address this situation: real-time user data, secure distribution networks, sophisticated payment processing capabilities, trusted brands, a near universal subscriber base, as well as core voice and messaging products.
Problem is, this still positions carriers as intermediaries between businesses and consumers. It ignores the enormous reservoir of production capacity on the “consumer” side, both by individual users and by developers—two parties who have been dancing away on the Web's wide-open floor.
The big money for carriers isn't just going to be in B2B and B2C. It will be in supporting all kinds of new activities made possible by a wide-open Net: one no longer biased toward single uses and no longer priced to discourage productive involvement by individuals and small businesses.
For that to happen, we need developers to step up with ideas that are Net-based and not just Web-based—ideas that help carriers leverage benefits of incumbency other than old monopoly businesses.
There are clues in handhelds. The best “smartphones” are computing devices on which voice is just one among thousands of applications. (See “Smarter Than Phones” in this issue's UpFront section.) More important, the user is in charge of more and more apps and what can be done with them.
Independence, autonomy and choice are going to be facts of connected life for every individual, sooner or later. So will unlimited data integration and production potential. The policies, preferences and terms of service that matter will be those asserted by individuals, not just those controlled by service providers and other sellers.
There are huge opportunities in figuring out ways to help individuals and businesses form new and symmetrical relationships—ones in which choice is maximized on both sides. But it won't happen until we make the Net as open as it was born to be. That's a huge project. And we've barely started on it.
Doc Searls is Senior Editor of Linux Journal. He is also a fellow with the Berkman Center for Internet and Society at Harvard University and the Center for Information Technology and Society at UC Santa Barbara.