# Transform Methods and Image Compression

In this section, several examples using the *cosine
transform* are presented. This transform is used by JPEG,
applied to **8x8** portions of an image. An
**NxN**
cosine transform exists for every
**N**, which exchanges
spatial information for frequency information. For the case
**N=4**, a given
**4x4** portion of an image can be written as a
linear combination of the 16 basis images which appear in Figure
1(a).

The transform provides the coefficients in the linear combination, allowing approximations or adjustments to the original image based on frequency content. One possibility is simply to eliminate certain frequencies, obtaining a kind of partial sum approximation. The implicit assumption in JPEG, for example, is that the higher-frequency information in an image tends to be of less importance to the eye.
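The mechanics of the transform are small enough to sketch directly. The following Python/NumPy sketch (NumPy is a stand-in here, not part of the article's Octave scripts) builds the orthonormal **N**-point cosine transform matrix and applies it to a random **4x4** image; the 16 entries of `Tx` are exactly the coefficients of the 16 basis images.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix for an n-point transform."""
    k = np.arange(n).reshape(-1, 1)   # frequency index (rows)
    j = np.arange(n).reshape(1, -1)   # spatial index (columns)
    C = np.sqrt(2.0 / n) * np.cos((2 * j + 1) * k * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)        # DC row gets the smaller scale factor
    return C

C = dct_matrix(4)

# A random 4x4 "image" with integer entries in [0, 50], as in the text
rng = np.random.default_rng(1)
x = np.round(rng.uniform(0, 50, (4, 4)))

Tx = C @ x @ C.T        # forward 2-D transform: the 16 coefficients
x_back = C.T @ Tx @ C   # inverse transform recovers x exactly (no quantization yet)
```

Because `C` is orthonormal, the inverse transform is just its transpose, so without quantization no information is lost.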

The images in Figure 1 can be obtained from the scripts supplied on our web site (see Resources 4) as follows. We'll use “>” to denote the prompt printed by Matlab or Octave, but this will vary by platform.

Define the test image:

> x = round(rand(4)*50) % 4x4 random matrix, integer entries in [0,50]

This will display some (random) matrix, perhaps

    | 10 20 10 41 |
    | 40 30  2 12 |
    | 20 35 20 15 |

and we can view this “image” with the instructions:

> imagesc(x);    % Matlab users
> imagesc(x, 8); % Octave users

Something similar to the smaller image at the lower left in Figure
1(b) will be displayed. (We chose the **4x4**
example for clarity; however, the viewer in Octave may fail to
display it properly. In this case, either the image can be padded
before display or a larger image can be chosen.) Now ask for the
matrix of partial sums (the larger image in Figure 1(b)):

> imagesc(psumgrid(x)); % Display the 16 partial sums

The partial sums are built up from the basis elements in the order shown in the zigzag sequence above. This path through Figure 1(a) is based on increasing frequency of the basis elements. Roughly speaking, the artificial image in Figure 1(b) is the worst kind as far as JPEG compression is concerned. Since it is random, it will likely have significant high-frequency terms. We can see these by performing the discrete cosine transform:

> Tx = dct(x, 4) % 4x4 cosine transform of x

For the example above, this gives the matrix

    | 79.25   9.47   4.75 -11.77 |
    |  6.25 -19.69  -4.25 -11.60 |
    |  5.97   8.02  12.73 -15.64 |

of coefficients used to build the partial sums in Figure 1 from the
basis elements. The top left entry gets special recognition as the
*DC coefficient*, representing the average gray
level; the others are the *AC coefficients*,
AC(0,1) through AC(3,3).

The terms in the lower right of
**Tx** correspond to the
high-frequency portion of the image. Notice that even in this
“worst case”, Figure 1 suggests that a fairly good image can be
obtained without using all 16 terms.
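The effect of discarding the high-frequency terms can be sketched as follows (again in Python/NumPy as an illustrative stand-in for the article's `psum` script; the low-pass mask below, keeping terms whose row and column indices sum to less than **N**, is our choice for the example).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix for an n-point transform."""
    k = np.arange(n).reshape(-1, 1)
    j = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos((2 * j + 1) * k * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

n = 4
C = dct_matrix(n)
rng = np.random.default_rng(2)
x = np.round(rng.uniform(0, 50, (n, n)))   # random 4x4 "image"
Tx = C @ x @ C.T

# Low-pass approximation: keep coefficients with row + column index < n
# (10 of the 16 terms) and zero the high-frequency remainder
mask = np.add.outer(np.arange(n), np.arange(n)) < n
y = C.T @ (Tx * mask) @ C                  # partial-sum approximation of x
```

Even for a random block, `y` typically stays close to `x`, since much of the energy concentrates in the low-frequency coefficients.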

The process of approximation by partial sums is applied to a
“real” image in Figure 2, where **1/4**,
**1/2** and **3/4** of the 1024 terms
for a **32x32** image are displayed. These can be
generated with calls of the form:

> x = getpgm('math4.pgm'); % Get a graymap image
> n = length(x);           % n is the number of rows in the square image
> y = psum(x, n*n / 2);    % y is the partial sum using 1/2 of the terms
> imagesc(y);              % Display the result

Our approximations retain all of the frequency information corresponding to terms from the zigzag sequence below some selected threshold value; the remaining higher-frequency information is discarded. Although this can be considered a special case of a JPEG-like scheme, JPEG allows more sophisticated use of the frequency information.
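The zigzag path itself is easy to generate: walk the anti-diagonals of the block, alternating direction, so that terms appear in order of increasing frequency. A sketch (shown for **N=4**, though the same routine produces the standard JPEG ordering for **8x8** blocks):

```python
def zigzag_order(n):
    """(row, col) pairs of an n x n block in JPEG zigzag order."""
    order = []
    for s in range(2 * n - 1):   # anti-diagonal index = row + col = "frequency sum"
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        # Alternate direction on successive diagonals
        order.extend(diag if s % 2 == 1 else diag[::-1])
    return order

zz = zigzag_order(4)
# Begins (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ...
```

A threshold on position in this list is then a threshold on frequency, which is exactly how the partial sums above were truncated.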

JPEG exploits the idea of local approximation for its
compression: **8x8** portions of the complete image
are transformed using the cosine transform, then each block is
*quantized* by a method which tends to suppress
higher-frequency elements and reduce the number of bits required
for each term. To “recover” the image, a dequantizing step is
used, followed by an inverse transform. (We've ignored the portion
of JPEG which does lossless compression on the output of the
quantizer, but this doesn't affect the image quality.) The matrix
operations can be diagrammed as:

        transform       quantize        dequantize      invert
    x ----------> Tx ----------> QTx ----------> Ty ----------> y

In Octave or Matlab, the individual steps can be written:

> x = getpgm('bird.pgm'); % Get a graymap image
> Tx = dct(x);            % Do the 8x8 cosine transform
> QTx = quant(Tx);        % Quantize, using standard 8x8 quantization matrix
> Ty = dequant(QTx);      % Dequantize
> y = invdct(Ty);         % Recover the image
> imagesc(y);             % Display the image

To be precise, a rounding procedure should be done on the matrix
**y**. In addition, we have ignored the zero-shift
specified in the standard, which affects the quantized DC
coefficients.
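The quantize/dequantize pair is where the loss occurs, and it can be sketched concisely (in Python/NumPy rather than the article's `quant`/`dequant` scripts; the table below is the example luminance quantization table from Annex K of the JPEG standard):

```python
import numpy as np

# Example luminance quantization table from Annex K of the JPEG standard.
# Larger divisors toward the lower right suppress high frequencies.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

def quantize(Tx, Q):
    """Divide each coefficient by its table entry and round to an integer."""
    return np.round(Tx / Q)

def dequantize(QTx, Q):
    """Multiply back; the rounding error is what makes JPEG lossy."""
    return QTx * Q

# The round-trip error on any 8x8 coefficient block is at most Q/2 per entry
Tx = np.arange(64, dtype=float).reshape(8, 8) * 3.7   # arbitrary test block
err = np.abs(dequantize(quantize(Tx, Q), Q) - Tx)
```

Rounding to small integers is what makes the subsequent lossless coding effective: many quantized high-frequency terms become zero outright.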

It should be emphasized that we cannot recover the image
completely—there has been loss of information at the quantizing
stage. It is instructive to compare the matrices
**x** and
**y**, and the difference
image **x-y** for this
kind of experiment appears in Figure 3(f). There is considerable
interest in measuring the “loss of image quality” using some
function of these matrices. This is a difficult problem given the
complexity of the human visual system.
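One simple and widely used measure, despite its perceptual crudeness, is the peak signal-to-noise ratio, a function of the mean squared difference between **x** and **y**. A sketch:

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between an image and its
    reconstruction: higher is better, infinite for a perfect match."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

PSNR rewards small average error but knows nothing about the visual system, which is why no single number of this kind settles the quality question.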

The images in Figure 3 were generated at several “quality” levels, using software from the Independent JPEG Group (see Resources 5). The sizes are given in bits per pixel (bpp); i.e., the number of bits, on average, required to store each of the numbers in the matrix representation of the image. The sizes for the GIF and PNG versions are included for reference. (“Bird” is part of a proposed collection of standard images at the Waterloo BragZone [see Resources 11] and has been modified for the purposes of this article.)
