Work the Shell - When Is “Good Enough” Good Enough?
Last month marked the end of my series about writing a Blackjack game as a shell script, and I don't know about you, but I had a good time with the development process and even learned a bit more about the game itself. I received a number of fun e-mail messages from readers about the column, but I also received one that was particularly thought-provoking.
The author criticized me for using less-than-optimal algorithms for things like my shuffle routine, for using poor scripting style and generally questioned how much I really knew about shell script programming in the first place.
Having lived on the Internet for almost 30 years now, I'm quite familiar with flames and hostile e-mail, with people nit-picking, focusing on the molecules of the leaf without ever even knowing there's a forest ahead, but this message still got me thinking about the practice of scripting and of programming in general.
To quote a colleague of mine, Ken McCarthy, what's a better strategy, imperfect action or perfect inaction?
Although the Bourne Again Shell is remarkably capable and certainly has all the basic programmatic structures of more sophisticated programming languages, I think it's nonetheless fair to say that it's a lightweight, even throw-away programming environment. You don't write large, complex or mission-critical applications as shell scripts, do you?
That's how I have always approached shell script writing—basically as a fast prototyping environment. You want to know how many lines are in a file? Use something like:
lines=$(wc -l < "$filename")
Is that the most elegant and efficient solution? Probably not. Indeed, if you're doing that to test whether the file has nonzero content, you should be using the test command instead, but here's my point: it doesn't really matter.
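For the record, the test-based version looks something like the following. It's a minimal sketch, and the filename is just an illustrative assumption; the point is that `[ -s file ]` answers "does this file have content?" directly, without counting anything:

```shell
#!/bin/sh
# Check whether a file has content without counting its lines.
# The -s test is true if the file exists and has a size greater than zero.
filename="/etc/passwd"   # illustrative example file

if [ -s "$filename" ]; then
    echo "$filename has content"
else
    echo "$filename is empty or missing"
fi
```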
That's what I mean by imperfect action. It's far, far better to get going with your script, to build a sloppy prototype, to get it done, than to tune, clean up, tweak, rewrite and optimize until it's 3am two weeks after your deadline. That's perfect inaction, right? The zeal to wait and wait and tweak and prod until it really is perfect, by which time you've missed your deadlines and goals.
So, when I received the criticism from this reader that I hadn't chosen the best possible algorithms and wasn't using what he thought was an optimal scripting technique, I was glad to see how he thought things should be done. But I also was unsurprised, as one of the greatest challenges I believe facing software developers is learning that in many cases and situations “good enough” really is, well, good enough.
Am I advocating that the next time you're writing the firmware for the in-flight controller on the Boeing 777 you should cut corners, skip testing and write crummy code so you can ship on time? Of course not. But you know what? If you're writing a testing framework cron script that will simply log the start of the test, invoke a series of MySQL queries and log the end of the test, well, yeah, in that case, relatively crummy code, code that works well enough, might just be exactly what's required.
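To make that concrete, here's a sketch of the kind of "good enough" wrapper I have in mind: log the start, run whatever command you hand it (in the scenario above, a series of MySQL queries), log the end. The log path and the use of a command-as-arguments interface are my assumptions, not a prescription; there's no locking, no retry logic, and that's exactly the point:

```shell
#!/bin/sh
# good-enough test harness: log start, run the given command, log end.
# The default log path is an illustrative assumption.

logfile="${LOGFILE:-/tmp/nightly-test.log}"

echo "$(date): test run started" >> "$logfile"

# Run whatever was passed on the command line, capturing its output.
"$@" >> "$logfile" 2>&1

echo "$(date): test run finished" >> "$logfile"
```

Invoked from cron as, say, `harness.sh mysql -u tester testdb -e "SELECT ..."`, it does everything the job requires and not one thing more.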
I see this same perfectionist attitude with letters I occasionally receive from people who have bought my Wicked Cool Shell Scripts book. They haven't realized that writing a shell script is inherently an exercise in rough prototyping (hence the absence of sophisticated shell script development environments and testing frameworks), not a programming world where perfect code is the digital holy grail.
I also think that these rather snobbish software developers who worship elegance and dismiss software developed by people who are still stumbling their way through programming are doing a significant disservice to the world of computing.
Don't get me wrong: I appreciate an ingenious algorithm and a snappy implementation when I read code, but most of the real innovations in software and applications come from the bubble-gum-and-baling-wire crowd anyway, from the rough prototypes and the "barely beta" software that works just well enough to demonstrate a concept and get the community experimenting.
Some of you doubtless are a bit confused by this point in my column—puzzled that someone who is supposed to be teaching you tips and tricks of shell script programming is actually advocating sloppy coding and quick “throw-away” scripts. I ask instead why you're surprised at the idea that getting something out the door, solving the problem quickly and reasonably well, is often better than wasting—yes, wasting—time writing “the perfect script”?
This reminds me of a computer programming course I took many years ago at UC San Diego. Our challenge was to figure out the optimal sorting algorithm for a given situation and write a program that implemented it. Just about everyone in the class thrashed about, but I pulled out Knuth's Art of Computer Programming, picked out the algorithm he recommended, and typed it in, with an appropriate citation. I was penalized for “cheating” and had to argue adamantly that there was, in fact, no better way to learn how to choose the best sorting algorithm than to refer to the definitive work on the subject. Finally, the professor relented and gave me full credit.
Shell script programming is the same: shortcuts are always a good thing, efficiency is a measure of how fast you can solve a problem, and although tuning and tweaking can be rewarding as an intellectual exercise, 90% of the time it just doesn't matter at the end of the day.
Think about that. And ask yourself how you're working toward imperfect action rather than being trapped trying to achieve perfect inaction.
Next month, we'll go back to the nuts and bolts of shell scripting. But, please, don't expect me to write perfect little scripts or use the absolute best algorithm in the world for a given task. Indeed, sometimes my code will be inefficient, will spawn more subshells or child processes than entirely necessary, or might even have unnecessary loops and conditionals. Maybe, just maybe, that's okay?
Dave Taylor is a 26-year veteran of UNIX, creator of The Elm Mail System, and most recently author of both the best-selling Wicked Cool Shell Scripts and Teach Yourself Unix in 24 Hours, among his 16 technical books. His main Web site is at www.intuitive.com.