Work the Shell - Conditional Statements and Flow Control
The last of the basic building blocks of shell scripting are conditional statements, which let you decide programmatically whether to execute a block of statements based on a logical test, and flow control statements, the great innovation from the earliest days of programming that lets a block of code execute more than once. We explore both of these in this column and, with that, are finally done with the proverbial Lego blocks of scripting, freeing us to start solving complex scripting problems with novel and unique combinations of simple statements.
The most obvious conditional statement is if-then-else, which in shell scripting looks like:
if condition ; then statements ; else statements2 ; fi
Of course, you'd usually see this on multiple lines, so it's more likely to look like this:
if condition ; then
    statements
else
    statements2
fi
There are some variations on this, including safely omitting any sort of else clause, but more interestingly, you can “chain” conditionals together with an else if structure:
if condition ; then
    statements
elif condition2 ; then
    statements2
fi
That's perfectly valid and, worth noting, functionally different from the structure:
if condition ; then
    statements
    if condition2 ; then
        statements2
    fi
fi
The difference will be obvious to anyone who has programmed before. In the first example, statements2 executes if condition is false and condition2 is true. In the latter example, however, statements2 executes only if both condition and condition2 are true. Subtle, but very important!
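To make the contrast concrete, here's a small sketch you can run. The conditions are stand-ins (the true and false commands mentioned below), chosen so that condition is false and condition2 is true:

```shell
#!/bin/sh
# Stand-in conditions for illustration: condition is false, condition2 is true.
condition=false
condition2=true

# Chained form: this prints "statements2 (elif)", because condition fails
# and the elif branch is then tested.
if $condition ; then
    echo "statements"
elif $condition2 ; then
    echo "statements2 (elif)"
fi

# Nested form: this prints nothing, because condition2 is never even
# tested unless condition succeeds first.
if $condition ; then
    echo "statements"
    if $condition2 ; then
        echo "statements2 (nested)"
    fi
fi
```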
Specific logical conditions can take on a wide variety of appearances, because the only requirement for a conditional expression is that it return zero (success) if the evaluated condition should be considered true and nonzero if it's false. Indeed, there are commands in Linux called true and false, so you can use statements like "if true; then....". Most conditions, however, are built around the invaluable test command, with its many different flags and options.
Want to compare two string (text) values? You could use:
if test $myvar = "exit" ; then
or its shortcut alternative of:
if [ $myvar = "exit" ] ; then
Compare two numeric values with:
if test $numval -lt 10 ; then
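Both comparisons can be dropped into a runnable sketch. The variable values here are hypothetical, and the variables are double-quoted inside the brackets, a good habit that keeps test from failing when a variable happens to be empty:

```shell
#!/bin/sh
# Hypothetical values for illustration.
myvar="exit"
numval=7

# String comparison: = tests textual equality.
if [ "$myvar" = "exit" ] ; then
    echo "myvar matches"
fi

# Numeric comparison: -lt tests "less than" arithmetically.
if [ "$numval" -lt 10 ] ; then
    echo "numval is less than 10"
fi
```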
There's also a world of file and variable tests available in the test command, including -r to test whether a file is readable, -e to see whether it exists at all, -s to see whether it exists and has a nonzero size, -d to test for a directory and -f to test for a regular file.
So if you want to differentiate whether $filename is a file, directory or other file type, you could use a statement sequence like:
if test -f $filename ; then
    echo "$filename is a regular file"
elif test -d $filename ; then
    echo "$filename is a directory"
else
    echo "$filename is neither a file nor a directory."
fi
Check out the test man page (use man test) to read about all the many different conditionals you can use in a shell script.
Luckily, there are a number of looping and flow control structures above and beyond the if-then-else conditional. Here are the big three:
for x in y ; do statements ; done

while x ; do statements ; done

case x in
    condition1) statements ;;
    condition2) statements ;;
esac
There are more conditional statements, but you'll find that in the vast majority of cases, having for loops, while loops, case statements and if-then-else statements will serve as the building blocks of even the most complex script.
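The while loop deserves a quick demonstration of its own. Here's a minimal sketch (the starting value is arbitrary) that counts down, re-testing its condition before each pass:

```shell
#!/bin/sh
# Count down from an arbitrary starting value; the loop body runs
# as long as the test command between "while" and "do" succeeds.
counter=3
while [ "$counter" -gt 0 ] ; do
    echo "counter = $counter"
    counter=$(( counter - 1 ))    # POSIX arithmetic expansion
done
```

Running it prints counter = 3, counter = 2, then counter = 1, and the loop ends once the test returns nonzero.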
The for loop is particularly useful in its variations. Want to step through the parameters given to the shell script itself? Use something like this:
for value ; do statements ; done
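When the "in" clause is omitted, the loop steps through the positional parameters ("$@") automatically. As a sketch, save this as a script (the filename is hypothetical) and invoke it with some arguments:

```shell
#!/bin/sh
# With no "in" list, for iterates over this script's own arguments.
for value ; do
    echo "argument: $value"
done
```

Invoked as "sh showargs.sh one two", it echoes each argument on its own line.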
Want to step through a set of matching filenames for a given pattern? Here's how to do that in a script:
for filename in *.c ; do
    statements
done
Let's look at how a couple of these can be combined in useful ways, rather than just duplicate the man page, however. Here's a simple script that examines each entry in the current directory, indicating whether it's a file or directory:
for name in *
do
    if [ -f "$name" ] ; then
        echo "$name is a file"
    elif [ -d "$name" ] ; then
        echo "$name is a directory"
    else
        echo "$name is neither a file nor directory"
    fi
done
For illustrative purposes, let's try another version of this script, one that recognizes *.c as C source files, *.h as included header files and *.o as intermediate object files, but this time we'll use the case statement:
for name in *
do
    case "$name" in
        *.c ) echo "$name is a C source file"  ;;
        *.h ) echo "$name is a header file"    ;;
        *.o ) echo "$name is an object file"   ;;
    esac
done
From a readability perspective, the case statement is hard to beat!
Dave Taylor has been hacking shell scripts for over thirty years. Really. He's the author of the popular "Wicked Cool Shell Scripts" and can be found on Twitter as @DaveTaylor and more generally at www.DaveTaylorOnline.com.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high-availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide