Work the Shell - Solve: a Command-Line Calculator
One thing that's always bothered me about Linux, and UNIX before it, is that there isn't a decent command-line calculator available. You know, something where you can type in solve 5+8 or, better, solve 33/5 and get the solution.
There's expr, but that's barely useful at all, and I've always been baffled that, to this day, it's constrained to integer math. No one has ever extended its functionality beyond the most rudimentary capabilities for shell script programming.
There's bc, which has the power we seek, but it has to be one of the most bizarre interfaces of any program in the Linux panoply, and there's nothing more frustrating than accidentally falling into bc and being unable to get out!
The third choice is dc, the so-called desktop calculator (really, that's what dc stands for), but that too is fundamentally flawed because it uses RPN (reverse Polish notation—really, it's named after Polish mathematician Jan Lukasiewicz). Not sure what that is? Well, here's a demonstration of how it doesn't work:
$ dc 1+1
Could not open file 1+1: No such file or directory
$ dc -e 1+1
dc: stack empty
$ dc -e 1 + 1
Could not open file +: No such file or directory
With this kind of burnout on a rudimentary math task, do you really care about learning an entirely new notation to figure out that 1+1=3? No, 2?
Of these three choices, none suffice, but bc does show promise because it can handle floating-point numbers and has the ability to specify how much post-decimal-point precision you seek. Learn its obscure notation, and you can calculate 1+1:
$ bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1+1
2
quit
$
The challenge with bc is to revamp how you interact with it—to put a wrapper program “in front” of the utility so that you can use it as a quick-and-dirty command-line utility.
There are two problems with using it that way, as designed. First, you can't hand it an expression as a command-line argument:
$ bc -q 1+1
File 1+1 is unavailable.
(The -q option gets rid of the FSF intro header.)
Second, by default, bc offers up integer results only, so although you and I know that 3/2 = 1.5, bc shows it as 1, which makes it pretty darn useless for any precision calculations.
However, unlike the other calculation alternatives, bc does have the ability to be a bit more precise. The key is that you have to specify the scale, the number of digits after the decimal point that you want to see. Add that, and things change:
$ bc -q
11/7
1
scale=8
11/7
1.57142857
quit
$
The challenge for us is to figure out a way to write a shell script wrapper that allows us not only to do simple calculations from the command line, but also have them solved as floating-point calculations. The goal is to be able to type something like solve 11/7, and have it display 1.57142857.
At this point, given my headline, I have an urge to write in some sort of rhyming slang, but I know my editor won't let me get away with it, so you're safe. Nonetheless, wrappers are an important concept and a big part of why Linux is so darn powerful as an operating system.
In many ways, UNIX and Linux supply all the tools you need, the rudimentary building blocks, and one of the purposes of shell script programming is to add the veneer, the pleasantry of a usable user interface. That's exactly what we're doing with our solve script if you think about it. Actually, doing mathematics in a shell script would be pretty tricky, but we certainly can transform a simple query into the more complicated sequence of commands needed to get bc to output what we desire.
The challenge, though, is that we're not simply adding a command flag or turning an expression around; we need to capture the requested formula and inject it into a sequence of commands that we feed to the underlying Linux utility via standard input (stdin).
I do this by using what's called a here document, as denoted with the notation << in a script. Recall that a notation like wc < letter.txt invokes the wc command and uses the contents of letter.txt as stdin for the command. The result is the number of characters, words and lines in the file, as if I'd actually typed in the file, letter by letter.
The << notation is a convenient way to have a similar remapping of standard input for the invoked command, but based on the material that's actually present in the command sequence, not a separate file.
As a result, the character sequence immediately following the << symbol is the end marker, not the filename. It works like this:
cat << EndOfInput
This is a sample of the kind of trick you
can do with a here document. Why is this cool?
Because you can also expand variables ($PATH)
and do other spiffo shell script hijinks.
EndOfInput
Run this little script snippet (as a script), and you'll see the following:
$ sh samplescript.sh
This is a sample of the kind of trick you
can do with a here document. Why is this cool?
Because you can also expand variables
(/bin:/sbin:/usr/bin:/usr/sbin) and do
other spiffo shell script hijinks.
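One detail worth knowing, though it isn't covered above: quoting the end marker turns off that variable expansion, which matters once your here document itself contains $ signs or backquotes you want left alone.

```shell
#!/bin/sh
# Quoting the end marker ('EndOfInput') makes the here document literal:
# $PATH below is printed as-is instead of being expanded by the shell.
cat << 'EndOfInput'
No expansion here: $PATH stays exactly as typed.
EndOfInput
```

For our solve script we want the expansion, of course; that's the whole trick.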
In our case, this also means you can move a command-line argument into the middle of a sequence of commands being sent to a core Linux command like bc. For example:
#!/bin/sh
bc << EOF
scale=4
$@
quit
EOF
Believe it or not, that's the rudimentary solution to our challenge of writing a floating-point-capable command-line calculator. Check it out:
$ sh solve.sh 1+1
2
$ sh solve.sh 11/7
1.5714
Next month, we'll dig into useful refinements and make it a full-blown addition to our Linux toolkit. See you then!
Dave Taylor is a 26-year veteran of UNIX, creator of The Elm Mail System, and most recently author of both the best-selling Wicked Cool Shell Scripts and Teach Yourself Unix in 24 Hours, among his 16 technical books. His main Web site is at www.intuitive.com, and he also offers up tech support at AskDaveTaylor.com.