EOF - Net Development
It's important to remember that the Web began as a project. As Tim Berners-Lee explained (in an August 1991 post to alt.hypertext, groups.google.com/group/alt.hypertext/msg/395f282a67a1916c), “The WWW project merges the techniques of information retrieval and hypertext to make an easy but powerful global information system.”
Nearly two decades later, the Web has done exactly that—and much more. While still the province of search engines and browsers, it also includes a collection of utilities we call the cloud, backed by massive storage and compute capacities residing in the racks of Amazon and Google. That's in addition to countless Web services, applications and other graces of development work (such as those we cover in the preceding pages of this month's Linux Journal).
Yet no matter how large and encompassing the Web becomes, the Net remains the broader platform, the more encompassing environment. Everywhere society has digital foundations, the Net is there to make the connections. Today those connections span the whole world. So why don't we hear more about Net Development?
Although there are plenty of Internet protocols and applications outside the Web (IM, file syncing and sharing, and e-mail all jump to mind), we tend not to think of the Net as a platform. Perhaps that's because the Net's protocol suite is about transport rather than presentation or application. It doesn't care what datalinks (Ethernet, DSL, WDM, MoCA) or what physical or wireless media (copper, fiber, Wi-Fi, 3G, WiMAX) are used. It just makes a best effort over what's available.
And, that's the gating factor: what's available.
Today, most of us get on the Net through a phone or cable company that sells Net access as the third act in what they call a triple play. The first two acts are telephony and television. The Comcast Triple Play, for example, is pitched as “The best in TV, phone and Internet—three great services. One low bill. Hey, life just got a little easier.” This positions the Net as just another “service”, on par with television and telephony. Never mind that the Net can encompass both.
And they don't give us the whole Net. They cripple it with asymmetrical provisioning (even fiber deals default to higher downstream than upstream bitrates), blocked ports and lack of fixed IP addresses. If we want more, we have to move up to a “business” tier that begins with lower data rates and much higher prices: a shakedown racket that persists from the days when Ma Bell and national PTTs ruled the Earth.
The Net most of us know best is one where the Web is a wide-open platform for development, while the Net it runs on is “delivered” as a data spigot. Back when the carriers first realized that they were now ISPs, the Internet service they thought they'd be providing was biased by what they knew best and expected people would want: entertainment on the TV model. That usage materialized, but so did countless others. The carriers continue to miss a lesson of Web development that has thrived in spite of their asymmetrical biases: that open platforms without commercial biases support an infinitude of businesses. The Web is generative, as Jonathan Zittrain puts it in The Future of the Internet and How to Stop It (for more, see “A Tale of Two Futures”, the EOF from July 2008). The carriers don't yet see how selling the Net as just one (crippled) “play” forecloses an infinitude of other plays.
But the tide is starting to turn. In November 2008, I attended a “brainstorm” conference in London, put on by the Telco 2.0 Initiative, the mission of which is to “catalyze change in the Telecoms-Media-Technology sector”. Every speaker and panelist inveighed, one way or another, against “triple play” and every other doomed-monopoly business model. Instead, they expanded on this advice in the Telco 2.0 Manifesto (www.telco2.net/manifesto):
New value lies in addressing the friction that exists in everyday interactions between businesses and consumers, and governments and citizens. Typical examples include: authenticating users, market research, targeting promotions, distributing goods and content, collecting payments and providing customer care....
Telcos collectively have assets that can address this situation: real-time user data, secure distribution networks, sophisticated payment processing capabilities, trusted brands, a near universal subscriber base, as well as core voice and messaging products.
Problem is, this still positions carriers as intermediaries between businesses and consumers. It ignores the enormous reservoir of production capacity on the “consumer” side, both by individual users and by developers—two parties who have been dancing away on the Web's wide-open floor.
The big money for carriers isn't just going to be in B2B and B2C. It will be in supporting all kinds of new activities made possible by a wide-open Net: one no longer biased toward single uses and no longer priced to discourage productive involvement by individuals and small businesses.
For that to happen, we need developers to step up with ideas that are Net-based and not just Web-based—ideas that help carriers leverage benefits of incumbency other than old monopoly businesses.
There are clues in handhelds. The best “smartphones” are computing devices on which voice is just one among thousands of applications. (See “Smarter Than Phones” in this issue's UpFront section.) More important, the user is in charge of more and more apps and what can be done with them.
Independence, autonomy and choice are going to be facts of connected life for every individual, sooner or later. So will unlimited data integration and production potential. The policies, preferences and terms of service that matter will be those asserted by individuals, not just those controlled by service providers and other sellers.
There are huge opportunities in figuring out ways to help individuals and businesses form new and symmetrical relationships—ones in which choice is maximized on both sides. But it won't happen until we make the Net as open as it was born to be. That's a huge project. And we've barely started on it.
Doc Searls is Senior Editor of Linux Journal. He is also a fellow with the Berkman Center for Internet and Society at Harvard University and the Center for Information Technology and Society at UC Santa Barbara.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
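The find-plus-grep combination described above can be sketched as a one-line pipeline. The path, pattern and search string here are illustrative:

```shell
# Find every .log file under /home and list the ones containing "ERROR".
# -type f restricts matches to regular files; -print0 and xargs -0
# handle filenames with spaces safely, and -r skips grep entirely
# when find produces no matches (GNU xargs).
find /home -type f -name '*.log' -print0 | xargs -0 -r grep -l 'ERROR'
```

Each tool does one job: find selects the files, xargs batches them and grep searches them, which is exactly the erector-set composition the paragraph describes.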
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
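For context, a classic crontab entry for the kind of job discussed above might look like the following sketch. The schedule, path and search string are illustrative, not part of the webinar material:

```shell
# crontab fields: minute hour day-of-month month day-of-week command
# Run a log search every night at 2:15 am; cron mails any output
# (here, the list of matching files) to the crontab's owner.
15 2 * * * find /home -type f -name '*.log' -print0 | xargs -0 -r grep -l 'ERROR'
```

Cron handles simple time-based recurrence like this well; the webinar's premise is that dependencies, retries and cross-machine coordination are where it starts to fall short.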
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, nor does it account for the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide