Work the Shell - Solve: a Command-Line Calculator
One thing that's always bothered me about Linux, and UNIX before it, is that there isn't a decent command-line calculator available. You know, something where you can type in solve 5+8 or, better, solve 33/5 and get the solution.
There's expr, but it's barely useful at all; I've always been baffled that, to this day, it remains constrained to integer math and that no one has ever extended its functionality beyond the most rudimentary capabilities for shell script programming.
There's bc, which has the power we seek, but it has to be one of the most bizarre interfaces of any program in the Linux panoply, and there's nothing more frustrating than accidentally falling into bc and being unable to get out!
The third choice is dc, the so-called desk calculator (really, that's what dc stands for), but that too is fundamentally flawed, because it uses RPN (reverse Polish notation, which is indeed named for the Polish mathematician Jan Łukasiewicz). Not sure what that is? Well, here's a demonstration of how it doesn't work:
$ dc 1+1
Could not open file 1+1: No such file or directory
$ dc -e 1+1
dc: stack empty
$ dc -e 1 + 1
Could not open file +: No such file or directory
With this kind of failure on a rudimentary math task, do you really care to learn an entirely new notation just to figure out that 1+1=3? No, wait, 2?
Of these three choices, none suffice, but bc does show promise because it can handle floating-point numbers and has the ability to specify how much post-decimal-point precision you seek. Learn its obscure notation, and you can calculate 1+1:
$ bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1+1
2
quit
$
The challenge with bc is to revamp how you interact with it—to put a wrapper program “in front” of the utility so that you can use it as a quick-and-dirty command-line utility.
There are two problems with using bc as designed. The first is that you can't simply hand it an expression on the command line:
$ bc 1+1
File 1+1 is unavailable.
(In the examples that follow, the -q option gets rid of the FSF intro header.)
The second problem is that, by default, bc offers up integer results only, so although you and I know that 3/2 = 1.5, bc shows it as 1, which makes it pretty darn useless for any precision calculations.
However, unlike the other calculation alternatives, bc does have the ability to be a bit more precise. The key is that you have to specify the scale, the number of digits after the decimal point that you want to see. Add that, and things change:
$ bc -q
11/7
1
scale=8
11/7
1.57142857
quit
$
The challenge for us is to figure out a way to write a shell script wrapper that allows us not only to do simple calculations from the command line, but also to have them solved as floating-point calculations. The goal is to be able to type something like solve 11/7 and have it display 1.57142857.
At this point, given my headline, I have an urge to write in some sort of rhyming slang, but I know my editor won't let me get away with it, so you're safe. Nonetheless, wrappers are an important concept and a big part of why Linux is so darn powerful as an operating system.
In many ways, UNIX and Linux supply all the tools you need, the rudimentary building blocks, and one of the purposes of shell script programming is to add the veneer, the pleasantry of a usable user interface. If you think about it, that's exactly what we're doing with our solve script. Actually doing mathematics in a shell script would be pretty tricky, but we certainly can transform a simple query into the more complicated sequence of commands needed to get bc to output what we desire.
The challenge, though, is that we're not simply adding a command flag or rearranging an expression; we need to capture the requested formula and inject it into a sequence of commands that we're feeding the underlying Linux utility via standard input (stdin).
I do this by using what's called a here document, as denoted with the notation << in a script. Recall that a notation like wc < letter.txt invokes the wc command and uses the contents of letter.txt as stdin for the command. The result is the number of characters, words and lines in the file, as if I'd actually typed in the file, letter by letter.
The << notation is a convenient way to have a similar remapping of standard input for the invoked command, but based on the material that's actually present in the command sequence, not a separate file.
As a result, the character sequence immediately following the << symbol is the end marker, not the filename. It works like this:
cat << EndOfInput
This is a sample of the kind of trick you can
do with a here document. Why is this cool?
Because you can also expand variables ($PATH)
and do other spiffo shell script hijinks.
EndOfInput
Run this little script snippet (as a script), and you'll see the following:
$ sh samplepscript.sh
This is a sample of the kind of trick you can
do with a here document. Why is this cool?
Because you can also expand variables (/bin:/sbin:/usr/bin:/usr/sbin)
and do other spiffo shell script hijinks.
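Incidentally, if you ever want the opposite behavior, the shell lets you quote the end marker to suppress all expansion inside the here document; a small sketch:

```shell
# Quoting the end marker ('EndOfInput' instead of EndOfInput)
# tells the shell NOT to expand variables inside the here doc,
# so $PATH below is passed through literally.
cat << 'EndOfInput'
Variables like $PATH are NOT expanded here.
EndOfInput
```

For our solve script, though, expansion is exactly what we want, because it's how the user's formula gets into the command stream.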
In our case, this also means you can move a command-line argument into the middle of a sequence of commands being sent to a core Linux command like bc. For example:
#!/bin/sh
bc << EOF
scale=4
$@
quit
EOF
Believe it or not, that's the rudimentary solution to our challenge of writing a floating-point-capable command-line calculator. Check it out:
$ sh solve.sh 1+1
2
$ sh solve.sh 11/7
1.5714
Next month, we'll dig into useful refinements and make it a full-blown addition to our Linux toolkit. See you then!
Dave Taylor is a 26-year veteran of UNIX, creator of The Elm Mail System, and most recently author of both the best-selling Wicked Cool Shell Scripts and Teach Yourself Unix in 24 Hours, among his 16 technical books. His main Web site is at www.intuitive.com, and he also offers up tech support at AskDaveTaylor.com.