Many years ago the late Sixties revolutionary Jerry Rubin was a guest on Dick Cavett's show. Rubin went on about how everything—education, sex, driving, sleeping, coughing, you name it—was about politics. After listening patiently to Rubin's rantings for a while, Cavett said, “Politics bores my ass off.” It brought the house down.
I remember thinking, “My sentiments exactly.”
I'm a Sixties guy too. I went to college in Greensboro, North Carolina from '65 to '69 and marched there against the Vietnam War, racial segregation, union busting and other mainstream sentiments of that time and place. Afterwards in New Jersey, I worked for a series of government social programs. When funding for those ran out, I launched my career as a journalist by raking muck for local newspapers, often by giving government a hard time.
To the degree that stuff was political, I was involved in politics. But, truth be told, politics bored my ass no less than Dick Cavett's. In junior high I avoided Civics class like it was Home Ec. And though I've always tried to exercise my right to vote, I've tended to favor none-of-the-above candidates. I have rarely voted for somebody who won.
So I've always been amazed at people who get off on politics or who even consider it important. Like a lot of folks around the Linux/Free Software/Open Source communities, I have a mile-wide libertarian streak. I agree with Jefferson that “government that governs best governs least”.
But there are people in government who want to govern something I value, and I believe most of our readers value as well—something that is inherently ungovernable and that derives its value from the consent of its ungoverned billions of constituents: the Internet.
In the March 7 issue of SuitWatch, my biweekly newsletter for Linux Journal, I ranted about the DMCA, which is terrible legislation, and about CARP, which proposes terrible regulations based on the DMCA. I closed the piece with a plug for Larry Lessig's efforts to define the Net as a commons, rather than as a piping system for highly controlled “content”:
I suggest we join Larry there. And I suggest he join us in a Million-Customer March on Washington. I'm serious. Isn't it time we gave Congress a friendly lesson in the democratic nature of real markets?
What followed supports the Law of Intended Consequences. Jeff Gerhardt of The Linux Show not only got turned on by the idea, but he picked up the ball and ran with it for a touchdown and extra points.
The current manifestation of Jeff's effort, with help from Paul Jones (of Ibiblio), Eric S. Raymond and myself, is a public proposal for the creation of two entities—a consortium and a political action committee—built for the purpose of advocating the Net to lawmakers and regulators. The document explains itself, and will keep explaining itself every time we redraft it. So I won't expand here on what it says or proposes.
I will expand, however, on some of what I think about the inherent ungovernability of the Net.
As Monty of Ogg Vorbis put it to me a few weeks ago, “there's a reason it's called cyberspace, not cyberpipes”. Like outer space, it's infinite. Like inner space, it's personal. Unlike both, though, it exists between individuals, between things, between documents, nodes, services and other entities. Its hyperlinked nature is all between-ness. In fact it creates a new kind of space: inter space. And we're only beginning to explore it.
As David Weinberger (one of my Cluetrain co-authors) puts it in his brilliant new book, Small Pieces Loosely Joined, this new kind of space defies description in terms of more familiar spatial concepts:
Traditional space as a container, as the grand “outside” of everything that exists, is passive. But the Web in effect actively holds itself together. If I write a page, it becomes part of the Web, and thus extends the Web place, only if someone links to it. Otherwise it's simply a page no one will ever find. Through these billions of acts of will the Web is constructed and expanded.
Later he adds,
there's an important difference in the politics of space as well. In the real world, I can't just put a door from my apartment to my neighbor's so that anyone can go through. But that's exactly how the Web was built. Tim Berners-Lee originally created the Web so that scientists could link to the work of other scientists without having to ask their permission. If I put a page into the public Web, you can link to it without asking if it's all right with me, and without even letting me know you've done it.
Note his reference to “the politics of space”. This is not far from the same space John Perry Barlow advocated in A Declaration of the Independence of Cyberspace, back in 1996:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.
Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.
So why, if this space we've made is inherently ungovernable, do we make the effort to save it from government? Two short answers: 1) to save history some time and 2) to relieve ourselves of a pain in the ass.
The longer answer is far more important: To learn what it is we're making that's worth fighting for. I think we've only begun to understand it.
Doc Searls (firstname.lastname@example.org) is senior editor of Linux Journal.