EOF - A Tale of Two Futures
We've long since lost count of free and open-source (FOSS) codebases. Last I heard, the sum was passing half a million. If we were to visualize these as a tree, it would resemble a banyan—wide and flat, a forest in itself, with one main trunk in the middle and smaller ones under its radiating branches. That main trunk would be Linux. The ground would be the Internet.
Why has this vast organism grown so broadly and rapidly, with no end in sight? Many answers may come to mind, but I suggest one that should be new to Linux Journal readers—as it was to me, when I first heard it from Jonathan Zittrain. That answer is generativity.
In his new book, The Future of the Internet—And How to Stop It (Yale University Press, 2008), Jonathan defines generativity as “a system's capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences”. In an earlier research paper, “The Generative Internet”, he explained, “The grid of PCs connected by the Internet has developed in such a way that it is consummately generative. From the beginning, the PC has been designed to run almost any program created by the manufacturer, the user, or a remote third party and to make the creation of such programs a relatively easy task.”
Linux and the FOSS portfolio fit this description, and so do their developers. In fact, I submit that both are even more generative than the wide-open machines they put to work. But, although it would be nice to see FOSS programmers credited with setting new records for generativity, what I'd rather see is those same programmers playing a leading role in preserving and expanding the Net's generative power.
According to Jonathan, the future does not default to rosy. In fact, he says the Net's generative growth is stalling. “The future unfolding right now is very different from its past”, he writes. “The future is not one of generative PCs attached to a generative Internet. It is instead one of sterile appliances tethered to a network of control.” Among those appliances, he lists Microsoft's Xbox 360, Apple's iPhone and TiVo DVRs. Thus, we stand at a fork between two futures: one generative, the other applianced, and right now the applianced side is winning.
Linux and FOSS programmers are not innocent bystanders in this fight between futures. They contribute to both. As Jonathan puts it:
...generative and non-generative models are not mutually exclusive. They can compete and intertwine within a single system. For example, a free operating system such as GNU/Linux can be locked within an information appliance like the TiVo, and classical, profit-maximizing firms like Red Hat and IBM can find it worthwhile to contribute to generative technologies like GNU/Linux.
The generative/applianced divide is one between cultures as well as work, and we have geeks laboring on both sides of it. One side creates code that is both useful and re-usable—whether it's a leaf on the collective FOSS banyan tree, or humus in the networked ground on which that tree grows. The other side does what The Man tells it to do, even if the job is equipping an appliance to do something closed on top of open code.
What's strange is that both are mundane. They are not romantic. They do not supply fodder for partisan arguments. They are not box office. They are simply useful. This enormously productive (and reproductive) practicality is perhaps the most plain yet overlooked fact about FOSS development. Even within our community, we don't think much about how successful, common and purely generative our work is—and how much it has contributed to the growth and success of the Net. We just do good work, have fun and press on.
Yet there are these two sides. One thrives in the open world while the other disappears into machines. One makes stuff that is NEA: Nobody owns it, Everybody can use it, and Anybody can improve it. The other makes stuff that is OOO: One company owns it, Only its customers can use it, and Only the company and its captive partners can improve it.
Perhaps both will win, but maturing markets tend toward the simple and the predictable rather than the complicated and the chaotic. In technology, that favors the applianced over the generative.
I've always been an optimist about generativity, even though I didn't know the word until a few months ago. But I see Jonathan's case, and it has me worried. There is no shortage of closed appliances that run Linux. Sometimes we don't even know they're around. Both my Sony Bravia 1080p flat-screen and the Dish Network set-top box that feeds it have Linux operating systems. And both are built to prevent far more generativity than they enable.
Back in 2002, I wrote a piece titled “A Tale of Three Cultures” (www.linuxjournal.com/article/5912). One culture was FOSS hackers. One was embedded systems programmers. And the third was Hollywood, feeding popular culture. Toward the end of that piece, I offered a challenge: “And if we are asked by our employers and our government to replace the people's Net with a corporate digital rights management system, will we go about it as heads-down technologists? Or will we refuse to build it?”
That challenge still stands.
Doc Searls is Senior Editor of Linux Journal. He is also a Visiting Scholar at the University of California at Santa Barbara and a Fellow with the Berkman Center for Internet and Society at Harvard University.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
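The log-searching combination described above can be sketched in a few lines of shell. The directory, filenames and the search string "ERROR" here are all placeholder examples; in practice you would point find at /home and grep for whatever entry you need:

```shell
# Set up a small sample directory standing in for /home.
dir=$(mktemp -d)
echo "ERROR: disk full" > "$dir/app.log"
echo "all good here"    > "$dir/other.log"
echo "ERROR: also here" > "$dir/notes.txt"

# Find every .log file under the directory, then list only the
# ones that contain the entry we care about. notes.txt is skipped
# because it doesn't match the -name pattern.
find "$dir" -name '*.log' -exec grep -l 'ERROR' {} +

# Clean up the sample directory.
rm -rf "$dir"
```

The `-exec … {} +` form hands find's results to grep in batches, so the two tools compose without an intermediate file or loop.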
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
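For reference, the cron baseline the webinar starts from is one crontab line per job; the script path below is a hypothetical example:

```
# min  hour  dom  mon  dow  command
30     2     *    *    *    /usr/local/bin/rotate-logs.sh
```

A line like this runs reliably at a fixed time, but it has no native way to express job dependencies, retries or coordination across machines, which is the sort of gap that prompts the "is it enough?" question.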
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide