Fixing Photo Contrast with The GIMP
In my last article (LJ, February 2003), I described how to improve candid flash photos by removing red-eye using the GNU Image Manipulation Program (GIMP). In this article, I present another GIMP gem for fixing your photographs: using a digital split neutral density filter to repair bad pictures resulting from shooting high-contrast scenes.
The human eye is a remarkable image capture instrument. It is able to view a scene with a large dynamic range (range of luminosity or brightness) and to discern detail in both bright highlights and dim shadows. Dynamic range in photography is often measured in stops, where each stop represents a doubling or halving of light. Humans can discern detail in a scene with about 14 stops of dynamic range. Film and digital capture sensors are not as adept. Slide film typically can handle around 5-6 stops. Detail in areas below the lower limit is blocked up into dark shadows, and detail above the upper limit shows up as blown-out (completely white) highlights. Negative film does a bit better at 9-10 stops, and some high-end digital cameras (DSLRs) can do even slightly better than that. Typical consumer digicams fare somewhere in the lower middle of the pack and capture about 6-9 stops of detail, depending on the bit depth used in the digital capture process, the sensor size and a few other factors.
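Because each stop represents a doubling of light, a dynamic range of n stops corresponds to a contrast ratio of 2^n between the brightest and darkest detail a medium can record. A quick sketch makes the gap between eye and film concrete (the stop counts are illustrative values taken from the ranges above):

```python
def contrast_ratio(stops):
    """Each stop doubles the light, so n stops = a 2**n : 1 contrast ratio."""
    return 2 ** stops

for medium, stops in [("slide film", 6), ("negative film", 10), ("human eye", 14)]:
    print(f"{medium}: {stops} stops = {contrast_ratio(stops):,}:1")
```

Fourteen stops is a 16,384:1 ratio, against 64:1 for six stops of slide film, which is why a scene the eye reads comfortably can be far beyond what the film can hold.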
Knowledgeable photographers have long dealt with the limited dynamic range of their equipment by compressing the dynamic range of the scene they are photographing: using fill-flash, supplemental lighting or reflectors to brighten the shadows, or a special filter, such as a split neutral density filter (sometimes also called a graduated neutral density filter), to darken the highlights. An example of such a filter is shown in Figure 1. It is an accessory you attach to the front of your lens. It has a clear side and a dark gray side, with a small continuous transition zone dividing them. The dark part of the filter reduces the light by 1 stop, 2 stops or more, depending on the strength of the filter. When the camera is set up for a high-contrast shot (e.g., a sunset), the filter is positioned in front of the lens so that the dark part covers the highlights (e.g., the sky) and the clear part covers the rest of the image (e.g., everything below the skyline). The photographer then can meter the exposure for the shadows. If the filter is positioned correctly, the metering is accurate, and the photographer has knocked on wood, thrown a pinch of salt over his shoulder and said a short prayer, the whole image will come out properly exposed.
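Numerically, the filter's effect is simple: pixels behind the dark half are attenuated by a factor of 2 per stop, with a linear ramp across the transition zone. The following sketch simulates that on a linear-light pixel value; the function name, parameters and the linear-light assumption are mine, not from the article:

```python
def split_nd(pixel, y, height, strength_stops=2, transition=0.1):
    """Simulate a split ND filter on one linear-light pixel value.

    y / height is the pixel's vertical position (0 = top of frame).
    The top half is darkened by `strength_stops` stops, with a smooth
    linear transition zone of width `transition` around the middle.
    """
    frac = y / height
    mid, half = 0.5, transition / 2
    if frac < mid - half:                      # dark (sky) side
        factor = 2 ** -strength_stops
    elif frac > mid + half:                    # clear (foreground) side
        factor = 1.0
    else:                                      # blend across the transition
        t = (frac - (mid - half)) / transition
        factor = (2 ** -strength_stops) * (1 - t) + t
    return pixel * factor
```

With a 2-stop filter, a sky pixel is cut to one quarter of its brightness while the foreground passes through untouched, which is exactly the compression that lets both ends of the scene fit on the film.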
Most casual shooters can't be bothered to carry around split neutral density filters and use them. In such situations, a compromise exposure is the only real option. A typical programmed auto-exposure metering system often will set an exposure that takes the middle road, losing detail at both ends of the luminance range. If you're willing to control the exposure yourself, follow a rule of thumb oft-repeated by photographers shooting slide film: expose for the important highlights. It often is possible to rescue some detail from the shadows later, but once highlights are blown out, there's nothing that can be done to recover that detail. Remember that the rule says “important highlights”. If you are taking a picture of a sunset, you want to preserve the texture and detail in the clouds, which are brilliantly lit by the setting sun; if your main subject is a moose standing in a field at sunset, you'd probably rather have the detail in the moose's fur and let the cloud detail fare as it will.
Although you can't recover detail that is completely clipped in such exposures, it often is possible to tweak an image to rescue a fair amount of detail lurking in the highlights or shadows. In traditional wet-film processing, this is called dodging and burning: when making a print from a negative, parts of the paper are exposed more or less than the rest to hold detail in the highlights or pull detail from the shadows. Such machinations used to be reserved for advanced darkroom enthusiasts. Now, however, anyone with a copy of The GIMP can do all of this and more with considerable ease.
Let me illustrate with the following example: a Utah sunset, shown in Figure 2 loaded into a GIMP window. I had followed the sage advice and exposed for the clouds and highlights on the cliff face and allowed the foreground to go quite dark. Using the LAB decompose plugin, I can decompose this RGB image into the LAB constituents. Of these, the L channel shows the full range of luminance values carried in the image. As you can see from Figure 3, there is a considerable amount of detail in the foreground trees, which in the original image look almost completely blocked up. This is good, but how do I pull out this detail, while retaining the beautiful detail and color of the cliffs and clouds?
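The L channel of a LAB decomposition is essentially lightness. A rough way to see the same information outside The GIMP is to compute a luminance value for each RGB pixel; this sketch uses the common Rec. 601 luma weights as an approximation (the true LAB conversion is nonlinear, so this is illustration only, and the sample pixel values are invented):

```python
def luma(r, g, b):
    """Approximate luminance of an 8-bit RGB pixel (Rec. 601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two deep-shadow pixels that look "black" on screen can still carry
# distinguishable luminance detail, which is what the L channel reveals:
print(luma(18, 12, 10))   # hypothetical dark foreground pixel
print(luma(22, 16, 12))   # a slightly brighter neighbor
```

If two nearly black pixels have different luma values, there is shadow detail to rescue; if they had clipped to identical zeros, nothing could be recovered.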
The technique for rescuing that shadow detail is a bit like the digital equivalent of using a split ND filter. I combine two versions of the same scene, where each version has been optimized for either highlights or shadows. The technique makes use of layers and layer masks in The GIMP, so it is important to have a basic understanding of what these are beforehand. The next section introduces these concepts and provides a high-level overview of how the overall technique works.
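In pixel terms, a layer mask is just a per-pixel blend weight: where the mask is white, the top layer shows through; where it is black, the layer beneath does. A minimal sketch of that combine step, with function and variable names of my own choosing (The GIMP does this per channel across the whole image, of course):

```python
def blend(top, bottom, mask):
    """Composite two 'layers' through a grayscale mask.

    All arguments are equal-length lists of values in 0.0-1.0; where
    the mask is 1.0 the top layer wins, where it is 0.0 the bottom shows.
    """
    return [t * m + b * (1 - m) for t, b, m in zip(top, bottom, mask)]

# highlight-exposed layer on top, shadow-brightened layer beneath,
# gradient mask: white over the sky, black over the foreground
sky_exposed   = [0.90, 0.80, 0.10, 0.05]
shadow_lifted = [1.00, 1.00, 0.40, 0.35]
mask          = [1.0, 1.0, 0.0, 0.0]
print(blend(sky_exposed, shadow_lifted, mask))
```

With a soft gradient in the mask instead of a hard edge, the transition between the two exposures becomes invisible, which is what makes this the digital equivalent of the split ND filter's transition zone.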