Is free and open code a form of infrastructure? How about the humans who write it?
I was looking at what my friend Stephen Lewis wrote in Hak Pak Sak a few days ago, specifically "...newspapers’ roles as public trusts and cornerstones of our informational infrastructure, i.e. sources of solid information and independent commentary essential to informed citizenry, democratic government, effective public policy, and well-functioning economies". What this brought up for me is the notion that human beings are themselves infrastructural, especially when they are constructive contributors to the structure we call civilization.
Here in the free software and open source (FOSS) worlds, we're used to making, and employing, building materials that are products of human mentation. There are dependencies here, and the primary ones are on the human beings who write code. And patch it. And rewrite it. And continue to improve it, making it more and more useful.
In responding to an earlier piece of mine, Alex Fletcher writes about what he calls "inter-lock-in", a kind of positive lock-in among constructive yet constantly changing parts. What he's doing there is exploring the market mechanics of open source development. These mechanics are hard to understand from a pure dollars-and-cents perspective. Several years ago a high-ranking executive at IBM told me it took the company a number of years to discover that it couldn't tell its Linux kernel hackers what to do; if anything, it was the other way around. Again, dependencies.
But what does the code itself depend on? What are the first sources of open code's enormous "because effects"?
I think this is a subject we need to bear in mind as we come to debate matters as wide-ranging as media ownership (newspapers, for example) and health care over the next few years, during and beyond the next major election cycle. In his piece on newspapers, Steve Lewis writes, "Bottom-line and marketing-oriented decisions eviscerate the staffing, resources, and integrity that make newspapers what they are at their best." That same kind of thinking would never allow free and open source code to be written in the first place. Oddly, many companies today, especially large ones, look toward FOSS as a way to cheap out: to replace costly stuff with cost-free stuff. They don't get what free and open code is really about, how it works, or why you need to value (and support) its sources. This lack of understanding is much like that of newspaper owners who cut costs by junking their most valuable sources, which are not the advertisers.
The success of FOSS requires that we start looking at the sources of sources: human beings, doing constructive work. What kind of public policies might grow on the realization that the sources that matter most are the people who comprise as well as build civilization? What kind of businesses? What kind of civic and public institutions?
There's a bottom here: a foundation. But you can't necessarily see it from the bottom line of a company's balance sheet.
Doc Searls is Senior Editor of Linux Journal