The New Building Trade
My friend Frank Saelua is a builder. Back in the old country—in his case, Samoa—Frank taught math. I have no idea how much he knows about math, but I do know he is fully capable of building anything or fixing any building. To put it another way, Frank hacks buildings.
Frank was the foreman of the crew who built our house. This was no ordinary construction job. The architect was a 78-year-old who had been an apprentice of Frank Lloyd Wright and had many of the Great Man's qualities, including the prickly perversity behind such lines as “It's the job of the architect to bankrupt the builder.” While the house was a remodeling job, it was also completely original, turning a one-story ranch into a two-story modern, with cantilevered decks, whole walls of custom-made glass and almost nothing found in a catalog or at Home Depot.
Of course, mistakes were made, as they always are, and minds were changed. Much of the kitchen had to be redone. Pipes in the wrong places had to be moved. A bedroom wall bulged strangely and had to be flattened.
What amazed me was that Frank could look at all these problems—walls, windows, pipes and floors—as if they were modeling clay. As if nothing was a permanent structure. As if making or altering a building were merely a matter of tools and time. Need a door moved? Sure; stay out of the way and we'll do it this afternoon. Sorry about the dust.
I didn't fully understand the similarity between hacking Linux and hacking buildings until our own chief hacker and publisher, Phil Hughes, stayed at our place over the past few days, fixing just about everything that didn't work. This included my home-brew FM transmitter, which had baffled me for nearly a year (and I'm not stupid—except next to guys like Phil and our readers). Armed with a schematic, a meter and a soldering iron, he fixed the thing in less time than it took him to tell me how bad my tools were and what I should do to replace them. Not much different than his constant put-downs of all editing tools that don't measure up to vi.
Where Phil did his best work was on our Linux box, a no-name 133MHz PC clone. Using vi and other software tools, Phil turned it into a mean, clean Linux machine. When he was done, this 133MHz clone was routing e-mail, serving web pages, hosting files for an office full of Macs and PCs, and serving as a desktop for reading and writing .doc, .xls and .ppt files. In other words, he made it into a real Linux system. I submit that this is less an example of Linux doing good work than of a good worker using Linux and UNIX-grade tools to do what lesser materials and tools won't allow.
Phil looked at our Macs the way Frank looked at the home tool-boxes in our garage, each filled with amateur-grade implements from Sears and Orchard Supply. He made me realize I know even less about building real computing solutions (in the literal sense of that hackneyed word) than I do about building houses. I also realized only those who truly know the virtues of vi and other “Real Tools” are in a position not just to solve problems, but to build a better world.
For proof, look at the Internet. Much of what we know and love about the Internet—such as the way it moves mail and serves up pages—was built by guys who love to solve hard problems with good tools. These are guys who look at computing problems the way Frank Saelua looks at a bad wall.
Of course, there is far more to the Internet than Sendmail and Apache. But I submit there is something highly significant about the success of those solutions—two applications truly deserving of the label—that isn't highly obvious, and that is the matter of origins. There is something about where those problem solvers came from which gave their solutions an enormous scope.
Where they came from was the UNIX world. Back in 1994, when I got my first working account with an ISP, I had a hard time getting my head around all the things my stupid old Macintosh could suddenly do all at once: browsing in multiple windows; archie, gopher and TELNET sessions; file transfers and even web service, thanks to Chuck Shotton's freshly hacked WebSTAR. One day, I was on the phone with one of the geeks who built the ISP when he interrupted me and said, “You gotta understand: this is UNIX. You can do lots of stuff at once. In fact, there's just about no limit to what you can do.” At the time, his whole business was built on cheap, used Sun machines and a pile of free software.
There's just about no limit to what you can do. Combine the scope of UNIX with a problem-solver's mentality and you've got a future that's equally promising and hard to see from the non-UNIX perspective. Trying to anticipate that future with non-UNIX concepts is like trying to frame a skyscraper with nothing but two-by-fours and sheet rock. And this is what Microsoft is up against right now. Whatever its ambitions, Microsoft will always come from the desktop. From the client. From one person working alone with a personal computer.
The future of computing won't be built by a company, even though we'll call it an industry. It will be built by builders and companies of builders. Both will operate on UNIX-informed concepts of not just operating systems and development models, but of understanding and solving problems and—critically, because this is a first—of doing business.
This new building trade won't be limited by one vendor's monopolistic insistence that everything be built only with its materials and tools. It won't be made out of one vendor's pre-fab parts. Most of all, it won't be built on shaky foundations that nobody can improve because their bricks and mortar can be touched only by their own manufacturer.
At too many companies today, even the best builders are limited by software and tools they can get only from the aisles of Microsoft Depot. This will end. Now the software business will turn into a real building trade, because the whole conversation will be liberated from hegemonistic corporate agendas by the builders themselves. Where they'll come from is the UNIX world view—one where there's just about no limit to what you can do.
This new trade is about designing, assembling, reassembling and fixing structures that are good because good professionals using good tools and materials are doing the work and learning constantly from each other about how to do it better—and doing it for the love of the work far more than the money.
Enabling this community is fundamental to everything we do at Linux Journal. Many of our writers are just readers who step forward because they have useful stories to tell. Calling them freelancers doesn't cover their value. Eric Kidd puts it this way in a post at Scripting.com: “For those of you who aren't familiar with Linux Journal, it is one of the best geek magazines currently available. In an age when Dr. Dobb's and Byte have utterly forsaken their technical roots, LJ still publishes actual source code—in some issues, well over half the articles have sidebars with program fragments. The articles are written by members of the Linux community.”
So there it is. You made us what we are, and for that, we owe you a hearty thanks. Now let's get back to work.
Doc Searls is Senior Editor of Linux Journal
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
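The find-plus-grep combination described above can be sketched as a short pipeline. This demonstration builds a throwaway directory as a stand-in for /home (the directory names, log contents and the search string "ERROR" are all illustrative):

```shell
# Stand-in for /home, populated with two sample user directories.
demo=$(mktemp -d)
mkdir -p "$demo/alice" "$demo/bob"
printf 'ok\n'             > "$demo/alice/app.log"
printf 'ERROR: disk full\n' > "$demo/bob/app.log"

# Find every .log file, then search each one for "ERROR".
# -print0 and xargs -0 keep filenames with spaces intact;
# grep -l prints only the names of files that match.
find "$demo" -type f -name '*.log' -print0 | xargs -0 grep -l 'ERROR'

rm -rf "$demo"
```

Here the pipeline prints the path of bob's log and skips alice's; swapping in /home and a real search string turns the same one-liner into the tool the paragraph describes.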
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
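For readers who haven't used it, cron's whole interface is a crontab file of timed entries like the one below (the script path is a placeholder, not a real tool):

```shell
# crontab format: minute hour day-of-month month day-of-week command
# This hypothetical entry runs a log-rotation script every night at 2:30 a.m.
30 2 * * * /usr/local/bin/rotate-logs.sh
```

That simplicity is cron's strength, and also why questions about dependencies between jobs, retries and cross-machine scheduling push shops to look beyond it.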
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.