EOF - The Power of Definitions
As a concept, freedom is usually defined two ways, one negative and one positive. Freedom from is the negative. Freedom to is the positive. Countless social and political causes grow around the need for freedom from—slavery, oppression, poverty, taxation—anything that limits our freedom to act, move, associate, choose.
The freedoms described by the Free Software Definition (www.gnu.org/philosophy/free-sw.html) are all positive:
The freedom to run the program, for any purpose (freedom 0).
The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help your neighbor (freedom 2).
The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.
These freedoms are also personal: “Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software”; and “a program is free software if users have all of these freedoms.”
Freedom is a profoundly human value. We are, more than any other species, devoted to originality, and we savor values that express it: intelligence, talent, choice, craft. Other animals make things too. Birds build nests, ants build hills, beavers build dams, bees build hives. But it is the nature of each to build these things the same ways as others within the species. Every human is different. What we value most in people is what makes them different from other people and what they do that's different. Freedom maximizes the scope of those differences and of our originality.
Software is one among countless other original human creations, but with an essential difference: it has no physical substance. Even the ephemeral creations we call music and speech are waves compressed within air. Software is something else. It is code. At a deeper level, it is binary math: ones and zeros.
Humans make sense of things through their bodies. Good is “up” and “light”, while bad is “down” and “dark”, because we are upright-walking diurnal animals. If we had the bodies of raccoons, we might say the opposite. Our worlds are full of metaphorical understandings grounded in our physical structures. When we say, “He picked my face out of a crowd”, we use the metaphor seeing is touching. When we say we “grasp a concept”, we use the metaphor understanding is grasping. What we do with our bodies shapes what we know in our minds and how we talk about it.
Yet software isn't physical. We need help understanding it, or we'll mess up by understanding it with misleading metaphors (for example, that it's a packaged good, like cereal). This is why we need to start with deep insights into software's nature, and into connections between that nature and our own. The Free Software Definition provides those. So does the companion concept of copyleft (www.gnu.org/copyleft/copyleft.html), which protects the liberties inherent in free software. This is why Richard M. Stallman calls free software a “social movement”, while positioning open source as a “development methodology” (www.gnu.org/philosophy/open-source-misses-the-point.html).
Today we live in a networked world not only filled with free software and open-source code, but also increasingly organized and defined by it. This has caused problems of perception that are similar to those that required the Free Software Definition 25 years ago.
The Internet, for example, has become a form of infrastructure, yet it lacks the physical qualities that have defined familiar forms of infrastructure in the past. Although it embodies qualities that are similar to real estate (“sites” and “domains” with “addresses”) and transport systems (“pipes” and “highways”), its supportive capacities are categorically limitless. This is why restricting our understanding of the Net to real estate and transport metaphors is a mistake.
Ask ten people to tell you what the Net is, and you'll get ten different answers. The same won't happen if you ask them what a road or a water system is. Or a phone or cable TV system. An irony in that last case is that telephony and television are now forms of data. In February 2009, here in the US, analog broadcast television will go the way of the steam locomotive. All TV broadcasting will be digital. Yet it will still be represented in familiar analog-like ways, with “channels” from “networks” and so on. Lost is the fact that these things are coming to homes by digital signaling using Internet protocols.
Where I live in California, burying service underground is a huge chore. The ground is rocky, and underground service culverts need to be eight feet deep, so there's room to keep electrical, cable TV and telephone services separate, just like they are on the poles above the ground. Yet the old analog phone wiring and coaxial TV cabling are no longer required. Being just data, telephony and television can be carried on fiber-optic cabling. And that cabling can run right next to high-voltage electrical wiring, as fiber-optic signaling is unaffected by proximity to electric current. The smart thing to do, then, is to trench the dimensions required for electric service, and run the rest over fiber-optic cabling alongside it.
But we're not ready for that, mostly because we still see the Net as a grace of telephone or cable company carriage—not as something that's essentially free and open. Yes, capital outlays are required, but the upsides of making those outlays are incalculably large, for everybody.
So our problem with the Net is very similar to the problem we had with software up to a quarter century ago: it's seen as essentially proprietary. We think of it as something owned and/or controlled by a big company and delivered as a “service” that we “access”. Although that's how most of us “get” the Net today, that understanding is at odds with the Net's free and open nature, and with our own as sources of value for the Net.
What we need now is a definition of the Net that is as deep and useful as the Free Software Definition's is for software. Without that definition, the Net will continue to be defined mostly by government, and by phone and cable companies.
Doc Searls is Senior Editor of Linux Journal and a fellow with both Berkman Center for Internet and Society at Harvard University and the Center for Information Technology and Society at the University of California, Santa Barbara.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality is why UNIX system administrators always seem to have the right tool for the job.
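As a minimal sketch of that find-plus-grep combination, the one-liner below looks for every .log file under /home and prints the names of the files containing a given entry (the directory and the search string "ERROR" are placeholders here; substitute your own):

```shell
# Select all .log files under /home, then hand the whole batch to grep.
# grep -l prints only the names of files that contain the entry,
# rather than every matching line.
find /home -type f -name '*.log' -exec grep -l 'ERROR' {} +
```

The `-exec ... {} +` form passes files to grep in batches, so it behaves much like piping find's output through xargs, without tripping over filenames that contain spaces.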
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide