Is free and open code a form of infrastructure? How about the humans who write it?
I was looking at what my friend Stephen Lewis wrote in HakPakSak a few days ago, specifically "...newspapers' roles as public trusts and cornerstones of our informational infrastructure, i.e. sources of solid information and independent commentary essential to informed citizenry, democratic government, effective public policy, and well-functioning economies." What this brought up for me is the notion that human beings are themselves infrastructural, especially when they are constructive contributors to the structure we call civilization.
Here in the free software and open source (FOSS) worlds, we're used to making, and employing, building materials that are products of human mentation. There are dependencies here, and the primary ones are on the human beings who write code. And patch it. And rewrite it. And continue to improve it, making it more and more useful.
In responding to an earlier piece of mine, Alex Fletcher writes about what he calls "inter-lockin," a kind of positive lock-in among constructive yet constantly changing parts. What he's doing there is exploring the market mechanics of open source development. These mechanics are hard to understand from a pure dollars-and-cents perspective. Several years ago a high-ranking executive at IBM told me it took the company a number of years to discover that it couldn't tell its Linux kernel hackers what to do, and that if anything it was the other way around. Again, dependencies.
But what does the code itself depend on? What are the first sources of open code's enormous "because effects"?
I think this is a subject we need to bear in mind as we come to debate matters as wide-ranging as media ownership (newspapers, for example) and health care over the next few years, during and beyond the next major election cycle. In his piece on newspapers, Steve Lewis writes, "Bottom-line and marketing-oriented decisions eviscerate the staffing, resources, and integrity that make newspapers what they are at their best." That same kind of thinking would never allow free and open source code to be written in the first place. Oddly, many companies today, especially large ones, look toward FOSS as a way to cheap out: to replace costly stuff with cost-free stuff. They don't get what free and open code is really about, how it works, or why you need to value (and support) its sources. This lack of understanding is very similar to that of newspaper owners who cut costs by junking their most valuable sources, which are not the advertisers.
The success of FOSS requires that we start looking at the sources of sources: human beings, doing constructive work. What kind of public policies might grow on the realization that the sources that matter most are the people who comprise as well as build civilization? What kind of businesses? What kind of civic and public institutions?
There's a bottom here: a foundation. But you can't necessarily see it from the bottom line of a company's balance sheet.
Doc Searls is Senior Editor of Linux Journal