Transform Methods and Image Compression
One troublesome aspect of JPEG-like schemes is the appearance of “blocking artifacts,” the telltale discontinuities between blocks which often follow aggressive quantizing. The image on the left in Figure 6 was produced using a scalar multiple of the suggested luminance quantizer. The block boundaries are clearly visible, especially in the “smoother” areas of the image.
JPEG operates on individual 8x8 blocks in the image and processes them independently. If the quantizing is aggressive, there can be significant loss of detail information within the individual blocks. The cosine transform used in JPEG has properties which may (indirectly) help smooth the transition between neighboring blocks; even so, the traces of the block-by-block processing can be apparent when the blocks are reassembled and the image restored. In this case, it may be desirable to implement a smoothing scheme as part of the restoration process. This section considers the back-end smoothing procedure discussed in the book JPEG Still Image Data Compression Standard (see Resources 7).
The JPEG decompressor may have only rough estimates about much of the original frequency information, but it typically has fairly good estimates of the average level of gray in each original 8x8 block (because of the way quantizers are chosen). The idea is to use the average gray (DC-coefficient) information of its nearest neighbors to adjust a given block's (AC-coefficient) frequency information. Figure 4 illustrates the process with a single “superblock” consisting of a center 8x8 block and its nearest neighbors. The center block in the image on the right has been “smoothed” by the influence of its nearest neighbors (the surrounding eight 8x8 blocks).
The process on a more complicated image is illustrated in Figure 5. Here, the image is plotted as a surface where, at each pixel (y,x), the height of the surface represents the gray value. For a given 8x8 block, the 3x3 superblock consisting of its nearest neighbors contains 24x24 = 576 total entries. The polynomial

p(x,y) = a1x^2y^2 + a2x^2y + a3xy^2 + a4xy + a5x^2 + a6y^2 + a7x + a8y + a9
is fitted by requiring that the average value over each subblock matches the average gray estimate (this gives nine equations for the unknowns a1,...,a9). The polynomial defines a surface over the center block, which approximates the corresponding portion of the original surface. Figure 5 shows a surface in (a) and its polynomial approximation in (b).
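To make the fit concrete, here is a minimal Octave sketch in the spirit of the article's scripts (gbar, expnt and avg1 are hypothetical names, not taken from the Resources 4 scripts). It assumes gbar holds the 3x3 average-gray estimates for the superblock, and it uses the fact that the average of a monomial y^j x^k over an 8x8 subblock factors into two one-dimensional averages:

% Fit p(x,y) so that its average over each 8x8 subblock matches the
% corresponding entry of gbar (a 3x3 matrix of average-gray
% estimates; hypothetical name).
coords = (1:24) - 12.5;            % pixel coordinates, centered on
                                   % the 24x24 superblock
avg1 = @(v, k) mean(v .^ k);       % average of t^k over the points v

% Exponents (j, k) of y^j * x^k for the coefficients a1,...,a9.
expnt = [2 2; 1 2; 2 1; 1 1; 0 2; 2 0; 0 1; 1 0; 0 0];

A = zeros(9);  b = zeros(9, 1);  row = 0;
for by = 1:3                       % loop over the nine 8x8 subblocks
  for bx = 1:3
    row = row + 1;
    ys = coords((by-1)*8 + (1:8)); % y-coordinates in this subblock
    xs = coords((bx-1)*8 + (1:8)); % x-coordinates in this subblock
    for i = 1:9                    % each monomial's subblock average
                                   % is a product of 1-D averages
      A(row, i) = avg1(ys, expnt(i,1)) * avg1(xs, expnt(i,2));
    end
    b(row) = gbar(by, bx);         % match the average gray estimate
  end
end
a = A \ b;                         % the coefficients a1,...,a9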
The JPEG decompressor can perform the transform procedure on a polynomial approximation, obtaining a set of predictors for the frequency information of the original image. The original estimates passed by the compressor can be adjusted using these predictors in the hope of reducing the blocking problem.
In Figure 5, the lowest five frequencies were considered for adjustment by the predictors: zero values passed by the compressor were replaced by the predicted values (subject to a certain clamping). The procedure applied to an aggressively quantized bird image appears in Figure 6. The deblock.m script (see Resources 4) performs the smoothing. The following code was used to generate the right-hand image:
> x = getpgm('bird.pgm');   % Get a graymap image
> Tx = dct(x);              % Do the 8x8 cosine transform
> QTx = quant(Tx, 4*stdQ);  % Quantize, using 4*luminance
> Ty = dequant(QTx);        % Dequantize
> Tz = deblock(Ty);         % Smooth
> z = invdct(Tz);           % Recover the image
> imagesc(z);               % Display the image
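The adjustment itself happens inside deblock.m; the following fragment is only a rough sketch of the kind of substitution involved, not the actual script. It assumes T is a single 8x8 block of dequantized coefficients, P is the fitted polynomial surface over that block, Q is the quantizer actually used (4*stdQ above), and dct2 (from Octave's signal package) stands in for the article's blockwise transform, up to normalization conventions; the clamp to half a quantizer step is one plausible choice:

% Hypothetical per-block adjustment (not the actual deblock.m):
% fill in zeroed low frequencies with clamped predictors.
pred = dct2(P);                    % predictors: transform of the
                                   % fitted polynomial surface P

low = [1 2; 2 1; 3 1; 2 2; 1 3];   % the five lowest AC frequencies
                                   % in zigzag order (1-based)
for k = 1:size(low, 1)
  r = low(k, 1);  c = low(k, 2);
  if T(r, c) == 0                  % adjust only coefficients lost
                                   % to quantization
    lim = Q(r, c) / 2;             % keep the prediction inside the
                                   % quantization interval of zero
    T(r, c) = max(-lim, min(lim, pred(r, c)));
  end
end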
This kind of smoothing scheme is attractive, in part because of its simplicity and the fact that it can be used as a back-end procedure to JPEG (regardless of whether the original file was compressed with this in mind). However, JPEG achieves its rather impressive compression by discarding information. The smoothing procedure sometimes makes good guesses about the missing data, but it cannot recover the original information.
Features of a signal we wish to examine can guide us in our quest for the “right” basis vectors. For example, the cosine transform is an offspring of the Fourier transform, the development of which was, in a sense, a consequence of the search for basic frequencies with which periodic signals could be resolved.
The Fourier transform is an indispensable tool in the realm of signal analysis. When used as a compression device, we might wish it had the additional capacity of being able to highlight local frequency information—generally, it doesn't. The weights given by the Fourier expansion of a signal may yield information about the overall strength of the frequencies, but the information is global. Even if a weight is substantial, it doesn't normally give us any clue as to the location of the “time interval” over which the corresponding frequency is significant.
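A quick Octave experiment makes the point (the frequencies and lengths below are arbitrary): sliding a short tone burst to a different spot in an otherwise silent signal changes only the phases of its discrete Fourier transform, so the magnitudes of the weights are identical for both versions and cannot reveal where the burst occurred.

% Two signals containing the same 50 Hz burst at different locations.
fs = 1000;  t = (0:99) / fs;       % 0.1 s of samples at 1000 Hz
burst = sin(2*pi*50*t);            % a short 50 Hz tone

early = zeros(1, 1000);  early(1:100)  = burst;
late  = zeros(1, 1000);  late(801:900) = burst;

% A circular time shift changes only the phases of the DFT, so the
% magnitude spectra agree to rounding error.
max(abs(abs(fft(early)) - abs(fft(late))))   % effectively zero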
The interest in and use of wavelet transforms has grown appreciably in recent years since Ingrid Daubechies (see Resources 1) demonstrated the existence of continuous (and smoother) wavelets with compact support. They have found homes as theoretical devices in mathematics and physics and as practical tools applied to a myriad of areas, including the analysis of surfaces, image editing and querying and, of course, image compression.
In this section, we present an example using the Haar wavelet, which in one sense is the simplest of wavelets. The 16 basis elements in Figure 7 form a basis for the set of 4x4 images. Compare these with the cosine transform elements in Figure 1. One can begin to see the formation of elements with localized supports even at this “coarse” resolution level.
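The elements of Figure 7 can be generated as outer products of the rows of the orthonormal 4x4 Haar matrix; the short Octave sketch below is illustrative (the names are ours, not from any of the referenced scripts):

% The orthonormal 4x4 Haar matrix.
H4 = [ 1        1        1        1;
       1        1       -1       -1;
       sqrt(2) -sqrt(2)  0        0;
       0        0        sqrt(2) -sqrt(2) ] / 2;

% The 16 basis arrays; B{i,j} is nonzero only where both Haar rows
% are, which is where the localized supports come from.
for i = 1:4
  for j = 1:4
    B{i,j} = H4(i,:)' * H4(j,:);
  end
end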
The simple (lossy) compression scheme used in the example is not as elaborate as the quantizing scheme used in JPEG. Basically, we throw away any weight whose magnitude falls below some selected threshold value. In Figure 8, we have used this simple scheme on “bird” at several tolerance settings.
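A minimal Octave sketch of the scheme, assuming (as with bird.pgm) a square graymap x whose side is a power of two; the threshold value is arbitrary:

H = 1;                             % build an orthonormal Haar matrix
while size(H, 2) < size(x, 2)      % ...matching the image width
  k = size(H, 2);
  H = [kron(H, [1 1]); kron(eye(k), [1 -1])] / sqrt(2);
end

w   = H * x * H';                  % Haar transform: the weights
tol = 10;                          % the selected threshold value
wt  = w .* (abs(w) >= tol);        % throw away the small weights
y   = H' * wt * H;                 % reconstruct the compressed image

nnz(wt) / numel(wt)                % fraction of weights retained

The last line reports the fraction of weights that survive the threshold, a rough proxy for the achievable compression.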
Setting a weight to zero in the transformed image is equivalent to eliminating the corresponding basis array in the expansion of the image. This illustrates a certain kind of simple-minded partial sum (projection) approach to compression, similar to the example in Figure 2. Examples of more sophisticated wavelet schemes can be done with Geoff Davis' Wavelet Image Compression Construction Kit (see Resources 2). Strang's article (see Resources 9) provides a short, elementary introduction to wavelets.