Work the Shell - Resizing Images, Sort Of
This might be a peculiarity of how I work with the Web, taking screenshots and then wanting to scale them to fit my page (especially when they're full-screen images), but I find that I spend a lot of time calculating how to reduce and scale images down evenly.
For example, I might take a full-size screen capture of the window within which I'm writing this particular column just to find that it's 722 x 719 pixels across and down, respectively. But if I were to include it on my Weblog, I would want to reduce it down to no more than 600 pixels so that it doesn't break my site layout.
I actually could reduce the image within the screen capture application or use a secondary graphical app, but it turns out that Web browsers can scale images up or down based on explicit “height” and “width” attributes. For example, let's say that the doc window is called edit.png. Then, I could include the image on a Web page with:
<img src="edit.png" alt="editing a file" />
and it would work fine. Scaling is easy too: simply add height and width attributes. To make them match the image itself, I'd use:
<img src="edit.png" alt="editing" height="719" width="722" />
However, as I said, it turns out that you actually can calculate different values, and the browser will scale it to match. To reduce the image 50%, for example, I would tweak it to read:
<img src="edit.png" alt="editing" height="359" width="361" />
So that's what I do on my site, and frankly, it's a pain.
Instead, what I'd really like is a utility that can figure out the current height and width of an image and then automatically scale it to the new value I desire based on a scaling factor. That's what we'll dig into for this column.
There are some terrific image manipulation packages available in Linux, most notably ImageMagick, but we don't need anything that fancy, because the pedestrian, old, undersung file command can do the job for us. I'm going to be looking at only PNG (Portable Network Graphics) files, as those are very much the best for most Web uses, but it's worth noting that many Linux file commands have a harder time calculating image dimensions for JPEG images.
Here's an example:
$ file edit.png
edit.png: PNG image data, 722 x 719, 8-bit/color RGB, non-interlaced
That's quite a bit of information actually, including the key elements—the dimensions of the image file itself. In this case, it's width x height, so 722 is the width, in pixels, and 719 is the height. These can be extracted from the output in a variety of ways, but the easiest is to use cut:
width="$(file $filename | cut -f5 -d\ )"
height="$(file $filename | cut -f7 -d\ )"
If you try this, however, you'll find that the height is wrong: it has a trailing comma, because cut is using spaces as the delimiter (which is what the weird-looking -d\ is specifying; the backslash stops the shell from interpreting the space as an argument delimiter). When you type this in, you'll want a space after the backslash and before the closing parenthesis for just that reason. It's fixable, though, with sed.
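A sed substitution that strips the trailing comma does the trick. Here's a minimal sketch, using a hardcoded sample of file's output for illustration (in the real script, the string would come from running file against $filename):

```shell
# Sample of file(1)'s output, hardcoded here purely for illustration:
info='edit.png: PNG image data, 722 x 719, 8-bit/color RGB, non-interlaced'

# Width is the 5th space-delimited field:
width="$(echo "$info" | cut -f5 -d' ')"

# Height is the 7th field, with sed stripping the trailing comma:
height="$(echo "$info" | cut -f7 -d' ' | sed 's/,$//')"

echo "width=$width height=$height"
```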
Now that we have numeric values, how do we scale them automatically? I like using the bc binary calculator, even though its interface is so crufty. Multiplying 722 by 0.50 (which is, of course, 50%) is done like this:
echo 722 * 0.50 | bc
except that the shell will expand the unquoted * into a list of filenames before bc ever sees it. Some judicious use of quotes addresses the problem neatly:
width="$(echo "$width * $multiplier" | bc)"
That's certainly more shell-scripty, and it works fine, except I found that with some implementations of bc, even adding scale=0, which theoretically should remove the trailing fractional element that results from the multiplication, didn't give us an integer return value. Again, a simple fix gives us the final script line:
width="$(echo "$width * $multiplier" | bc | cut -d. -f1)"
The same thing gives us the newly calculated “height”, and if the user specifies a multiplier that's less than one, it scales down. If you specify a greater value, you just as easily can scale up.
Dave Taylor has been hacking shell scripts for over thirty years. Really. He's the author of the popular "Wicked Cool Shell Scripts" and can be found on Twitter as @DaveTaylor and more generally at www.DaveTaylorOnline.com.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
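That find-plus-grep combination can be strung together in one line; here's a sketch, where the function name and the search pattern are made-up examples:

```shell
# findlogs: list every .log file under a directory that contains a
# given entry. The name "findlogs" is a hypothetical example.
findlogs() {
  dir="$1"
  pattern="$2"
  # grep -l prints only the names of files that match:
  find "$dir" -name '*.log' -exec grep -l "$pattern" {} +
}

# e.g., findlogs /home 'connection refused'
```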
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide