Work the Shell - Solve: a Command-Line Calculator Redux
Ooops! Two months ago, I started exploring how you can write a simple but quite helpful interactive command-line calculator as a shell script and ended the column with “Next month, we'll dig into useful refinements and make it a full-blown addition to our Linux toolkit. See you then!”
Unfortunately, last month, I got sidetracked with the movie The Number 23 and started another script looking at how to do numerology within the shell scripting environment. You'd think I was a typical programmer or something, being sidetracked and losing a thread by picking up another one. It reminds me of those glorious startup days from the late 1990s too, but that's an entirely different story.
Anyway, numerology can wait another month. This column, I'd like to complete the command-line calculator because, well, because it's so darn useful and simultaneously astonishing that there isn't a decent command-line calculator in Linux after all these years. I mean, really!
It was a while back, so let me remind you that the wicked short script to give you the rudimentary calculator is this:
#!/bin/sh
bc << EOF
scale=4
$@
quit
EOF
That's it. Name it solve.sh, for example, and you can test it, as shown here:
$ sh solve.sh 1+3
4
$ sh solve.sh 11/7
1.5714
It's easy enough to alias solve to the shell command too:

alias solve="sh solve.sh"

Better is to specify the full path to the script:

alias solve="sh ~/bin/solve.sh"

as that'll work regardless of where you are in the filesystem (location-dependent commands are a typical shell gaffe).
What I'd really like, however, is to be able to go into a “solve” mode where anything I type automatically is assumed to be a mathematical equation, rather than have to type solve each time.
We've talked about shell script wrappers in the past, so you should recall this basic structure:
while read userinput
do
  echo "you entered $userinput"
done
That's too crude to use as it stands, but we easily can add a prompt so that it looks like a real program:
echo -n "solve: "
while read expression
do
  echo "you entered $expression"
  echo -n "solve: "
done
Look good? Actually, it's not. There's a subtle error here, one that's another common scripting mistake. The problem is that there are two echo commands in Linux: one that's the built-in capability of the shell itself, and one that's a separate command located in /bin. This is important because the built-in echo doesn't know what the -n flag does, but the /bin/echo command does. A tiny tweak, and we're ready to test it:
/bin/echo -n "solve: "
while read expression
do
  echo "you entered $expression"
  /bin/echo -n "solve: "
done
Let's see what happens:
solve: 1+1
you entered 1+1
solve: ^D
That's more like it.
What we really want though, is a script that's smart enough to recognize whether you've specified parameters on the command line. If you have, it solves that equation, and if you haven't, it drops you into the interactive mode.
That's surprisingly easy to accomplish by testing the $# variable, which indicates how many arguments are given to the script. Want to see if it's greater than zero? Do this:
if [ $# -gt 0 ] ; then
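To see that test in context, here's a minimal sketch of the branching logic. The echo in the else branch is just a placeholder for the interactive loop we're building:

```shell
#!/bin/sh
# Sketch: branch on whether any arguments were given.
if [ $# -gt 0 ] ; then
  # Arguments given: hand them straight to bc, as before.
  bc << EOF
scale=4
$@
quit
EOF
else
  # No arguments: this is where the interactive mode goes.
  echo "no arguments: would enter interactive mode"
fi
```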
One more refinement before I show you the script in its entirety: I want to have it quit if users type in quit or exit, rather than force them to type ^D to indicate end of file on standard input (which causes the read statement to return false and the loop to end).
This is done with a simple string comparison test, which you'll recall is done with = (the -eq test is for numeric values). So, testing $expression to see whether it is “quit” is easy:
if [ $expression = "quit" ] ; then
To make it a bit more bulletproof, it's actually better here to quote the variable name, so that if users enter a null string (simply press Return), the conditional test won't fail with an ugly error message:
if [ "$expression" = "quit" ] ; then
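A quick demonstration of why the quotes matter. With the variable empty, the quoted form still gives the test command a (zero-length) word to compare; the unquoted form would expand to [ = "quit" ], which is a syntax error:

```shell
#!/bin/sh
# Demonstrate quoting a possibly-empty variable in a test.
expression=""
# Quoted: the empty string is still one word, so this is valid
# and simply evaluates to false.
if [ "$expression" = "quit" ] ; then
  echo "match"
else
  echo "no match"
fi
# Unquoted, [ $expression = "quit" ] would expand to
# [ = "quit" ] here and spit out an ugly error instead.
```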
Because I like to make my scripts flexible, I've also added exit as an alternative to quit, which easily is done with a slightly more complicated conditional test:
if [ "$expression" = "quit" -o "$expression" = "exit" ] ; then
The -o is the logical OR statement in a shell conditional test, but I have a feeling you've already figured that out.
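Putting the pieces together, the finished script might look something like this. This is a sketch assembled from the fragments above, not necessarily the exact final listing:

```shell
#!/bin/sh
# solve.sh -- a command-line calculator (sketch assembled from
# the fragments in this column).

if [ $# -gt 0 ] ; then
  # Expression on the command line: solve it and we're done.
  bc << EOF
scale=4
$@
quit
EOF
else
  # No arguments: drop into interactive "solve" mode.
  /bin/echo -n "solve: "
  while read expression
  do
    if [ "$expression" = "quit" -o "$expression" = "exit" ] ; then
      exit 0
    fi
    bc << EOF
scale=4
$expression
quit
EOF
    /bin/echo -n "solve: "
  done
fi
```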
Dave Taylor has been hacking shell scripts for over thirty years. Really. He's the author of the popular "Wicked Cool Shell Scripts" and can be found on Twitter as @DaveTaylor and more generally at www.DaveTaylorOnline.com.