Doing Good and Preventing Bad
Thirty years ago I was working at the Hanford Nuclear Reservation. The position focused on systems programming, but ultimately, the environment rubbed off on me. I learned a lot about nuclear power as well as the more antisocial aspects of nukes. And, I recognized a problem. Any number of issues came to mind; operational danger, waste disposal and life-cycle cost were the first three. In any case, I knew nuclear power was not going to be, as AEC Chairman Lewis Strauss had promised 20 years before, “too cheap to meter”.
I wanted to make sure everyone learned what I had regarding the issues of nuclear power. But, I also wanted everyone to learn all that I had learned about alternative energy sources such as solar, biomass and wind.
I quickly found I could not deal with both issues. Understanding and teaching about alternative energy is a huge job, as is pointing out flaws in nuclear power. So, I chose. Because of all I had learned at Hanford, I felt I was more qualified to talk about the problems of nuclear-power generation.
This was my choice for “preventing bad”. The more knowledge I could disseminate, the more likely it would be that the general population would see the issues, get involved and, in the long term, prevent the US from jumping deeper into the nuclear well.
Unfortunately, new “bad” appeared. As photovoltaics became cheaper, utility companies lobbied to make it harder for customers to sell power back to the grid. Once intertie systems for selling power back to the utilities were proved safe and effective, utilities came up with rate schedules that paid less for the power you sold back than they charged for the power they sold you, even though the peak output of solar systems coincided with periods of very high demand. They even sought to use the law to escape the consequences of their own failures, by making intertie customers help pay to decommission failed nuclear plants.
Now, back to software. When I first saw Linux (back when kernel versions started with a dot), I was merely looking at alternatives to the “real” UNIX systems. We had been in business publishing pocket reference cards and doing training and consulting on UNIX systems for about ten years.
I decided Linux was a lot more than simply a hobby project. I felt it showed great promise, so we changed direction from being a UNIX company to being a Linux company. With over 100 issues of Linux Journal under our belt, I feel we made the right choice.
Early on, a lot of my energy went into telling people about the virtues of Linux. Much like telling people that nuclear power costs too much, this was a hard sell at first. People didn't want to hear that Linux might be a better choice than what they were working with.
Now the days of telling people that Linux is a serious contender in the OS business are gone. Even if you still don't have it on your desktop, it is unlikely you will sit down for a web surfing session without getting involved with some Linux server.
The problem is that, much as solar panels on your roof are a threat to the nuclear power industry, Linux is a threat to the OS status quo. One can spend a lot of time and energy counteracting the FUD.
As Linux is proving to be a worthy alternative, the same sort of “it works but you can't put it here” arguments are appearing as they did with intertie systems. They merely go by different names: DMCA and bogus software patents are two of those names. As with nukes and solar energy, competition is fine, but what we see is the use of the legal system to harass promising new alternatives.
This work of “preventing bad” needs to be done. But, the alternative is simply to keep on truckin' with Linux—continuing to identify places where Linux solves a problem and moving ahead with the solution. So many places have either a time-consuming manual system or a poorly implemented, non-Linux system that people easily can make a career of problem solving with Linux.
The Linux movement needs both. Someone has to deal with FUD, and someone needs to move Linux into new places. To go back to my nuclear power analogy, if no one were out there developing alternative energy technology, there would be no alternative to nukes, no matter how bad a picture the antinuke activist painted.
As for me, I did my time dealing with the FUD. A lot of it was fun work, but I have pretty much moved into the “just do it with Linux” camp. I would rather show someone a solution and let them choose than spend my time counteracting anti-Linux propaganda.
I moved to Costa Rica over a year ago. On my one-year anniversary, I was thinking about how things have been different. Aside from the more obvious, such as my need to keep working on my Spanish, the biggest difference I have seen is that people here are more open to solutions. There is less disposable money here than in the US and less anti-Linux FUD. Thus, it is easier here than in the US to listen to someone's problem, propose a Linux-based solution and have them accept it. It makes it easier to believe Linus was right about World Domination, except that the US might be the last country to get Linux.
By the way, besides being Linux-friendly, Costa Rica produces all of its electricity from renewable sources, including hydro, geothermal, wind and solar. Maybe these two issues fit together more than I thought.
Phil Hughes is the publisher of Linux Journal.