# Transform Methods and Image Compression

This article has its origins in a data compression course we've been developing over the past few years at Auburn University. The course is elementary and begins with the basic (text) compression methods of Shannon and Huffman. Some of these methods can be appreciated with pencil-and-paper examples; others, such as the way lossy compression modifies images, call for some machine experimentation.

Students may choose to present a project as part of their course evaluation. We've seen various projects, including an amusing example of Huffman coding on a hand calculator, an overview presentation of PNG (Portable Network Graphics), and a project concerning smoothing in JPEG.

We will introduce the transform techniques of JPEG and wavelets, discuss some mathematical themes shared by these methods, and illustrate the use of a high-level linear algebra package in understanding such schemes. The images were generated using Octave and Matlab, primarily on GNU/Linux (x86) and Solaris (SPARC), but also on a Macintosh.

Data compression methods with zero information loss have been
used on image data for some time. In fact, the popular GIF format
uses an LZW scheme (the basic method used in UNIX
*compress*) to compress 256-color images. PNG is
more sophisticated and capable, using a predictor (or filter) to
prepare the data for a *gzip*-style compressor.
(Greg Roelofs has an introduction to PNG and some notes on patent
questions concerning GIF [see Resources 8].) However, applications
using high-resolution images with thousands of colors may require
more compression than can be achieved with these
*lossless* methods.

*Lossy* schemes discard some of the data
in order to obtain better compression. The problem, of course, is
deciding just which information is to be compromised. Loss of
information in compressing text is typically unacceptable, although
simple schemes such as elimination of every vowel from English text
may find application somewhere. The situation is different with
images and sound; in those cases, some loss of data may be quite
acceptable, even imperceptible.

In the 1980s, the Joint Photographic Experts Group (JPEG) was formed to develop standards for still-image compression. The specification includes both lossless and lossy modes, although the latter is perhaps of the most interest (and is usually what is meant by “JPEG compression”). G. K. Wallace has a paper (see Resources 10) discussing the standard in some detail.

The method in lossy JPEG depends on an important mathematical and physical theme for its compression: local approximation. The JPEG group took this idea and fine-tuned it with results gained from studies on the human visual system. The resulting scheme enjoys wide use, in part because it is an open standard but mostly because it does well on a large class of images, with fairly modest resource requirements.

JPEG and wavelet schemes fall under the general category of transform methods. Wavelet techniques were developed more recently than the classical transform used in JPEG, and are a consequence of the never-ending search for “better” basic images.

Roughly speaking, the first step in lossy compression schemes
like JPEG and wavelets is to break down an image into a weighted
sequence of simpler, more basic images. At this stage, the image
may be reconstructed *exactly* from knowledge of
the basic images and their corresponding weights. The effectiveness
of the method depends to a great extent on the choice of the basic
images. Once a set of basic images, or *basis*,
has been chosen, arbitrary images can be replaced by equivalent
collections of weights. A basic image having a correspondingly
large weight is an indication of its characteristic importance in
the overall image. (The assumption here is that the basis images
have been *normalized*, so that they have the
same mathematical size.)
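This decomposition can be made concrete with a small numerical sketch. The fragment below is in Python with NumPy (the article's own experiments used Octave and Matlab), and the 2×2 block and Haar-like basis are purely illustrative: it expresses an image as a weighted sum of normalized basis images and reconstructs it exactly from the weights.

```python
import numpy as np

# A hypothetical 2x2 gray-scale block (values in [0, 255]).
X = np.array([[100.0, 120.0],
              [140.0, 200.0]])

# Four orthonormal basis images (2D Haar-like patterns), each scaled so the
# sum of its squared entries is 1 -- the "same mathematical size" above.
h = 0.5
basis = [
    h * np.array([[1,  1], [ 1,  1]]),   # overall brightness
    h * np.array([[1, -1], [ 1, -1]]),   # left-right variation
    h * np.array([[1,  1], [-1, -1]]),   # top-bottom variation
    h * np.array([[1, -1], [-1,  1]]),   # diagonal variation
]

# Each weight is the inner product of the image with one basis image.
weights = [np.sum(X * B) for B in basis]

# Because the basis is orthonormal, the image is reconstructed *exactly*.
X_rec = sum(w * B for w, B in zip(weights, basis))
print(weights)                 # the large first weight reflects average brightness
print(np.allclose(X, X_rec))   # True
```

The large first weight says this block is mostly flat brightness; the smaller weights measure how much it varies in each direction.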

The mathematics behind this process is expressed in the language of linear algebra. There is considerable mathematical freedom in the choice of basis images; however, in practice they are usually chosen to exhibit features intrinsic to the class of images of interest. For example, JPEG chooses basic images designed to reflect certain classical spatial frequencies.
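To sketch what “classical spatial frequencies” means, the fragment below builds the orthonormal DCT-II matrix from which JPEG's 8×8 basis images are formed as outer products. The construction follows the standard DCT-II formula; the Python/NumPy rendering is ours.

```python
import numpy as np

N = 8  # lossy JPEG operates on 8x8 blocks

# Orthonormal DCT-II matrix: row u holds the u-th cosine pattern,
# C[u, x] = a(u) * cos((2x + 1) * u * pi / (2N)),
# with a(0) = sqrt(1/N) and a(u) = sqrt(2/N) for u > 0.
C = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * N)) for x in range(N)]
              for u in range(N)])
C[0] *= np.sqrt(1.0 / N)
C[1:] *= np.sqrt(2.0 / N)

# The rows are orthonormal, so C @ C.T is the identity matrix.
print(np.allclose(C @ C.T, np.eye(N)))   # True

# The basis image of spatial frequency (u, v) is the outer product of
# row u with row v; JPEG uses all 64 such images.
B_uv = np.outer(C[2], C[3])
```

Row 0 is constant (zero frequency), and each later row oscillates faster; the 64 outer products range from a flat block up to a fine checkerboard.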

The process of using a basis to resolve an image into a
collection of weights is called a *transform*.
To simplify things, we'll consider gray-scale images (color is
discussed briefly in the conclusion), which can be represented as
**m×n** arrays of integers. The range of values isn't important in
understanding the mathematical ideas, although it is common to
restrict values to the interval **[0,255]**, giving
a total of 256 levels of gray. As an example, Figure 3(a) shows an
image containing **256×256** pixels with 145 shades
of gray represented.

Mathematically, any basis for the space of
**m×n** gray-scale images must contain exactly
**mn** images—the number of pixels in an
**m×n** image. Consequently, the transform of an
**m×n** image will have **mn**
weights. The weights can be conveniently arranged into an
**m×n** array called the *transformed image* even though
it isn't a true image at all.

The transformation process, in itself, is certainly not a compression technique (since the transformed image is the same size as the original), but it can lead to one. Suppose the basis images can be chosen so that, for a wide class of images, many of the weights turn out to be small: for a given image, set these small weights to zero and use the resulting array of modified weights to represent it. Since the transform of the image has been modified, it can be used only to approximate the original. How good is the approximation? That depends on how good the scheme is for throwing out nonzero weights, that is, on the appropriateness of the basis elements and the number of weights which can be discarded.

JPEG and wavelet methods both employ this type of process and offer significant compression benefits, often with minimal impact on the quality of the reproduction. They differ in the choice of basis images, i.e., in the transform used, and subsequently in the method used to discard small weights. However, both share the idea of picking a basis that can efficiently represent an image, often using only a small number of its basic images.
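A minimal sketch of this discard-and-reconstruct idea, again in Python/NumPy with an invented smooth 8×8 block (actual JPEG quantizes the weights rather than simply thresholding them, but the effect is the same in spirit):

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix (the same construction that yields
# JPEG's 8x8 basis images).
C = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * N)) for x in range(N)]
              for u in range(N)])
C[0] *= np.sqrt(1.0 / N)
C[1:] *= np.sqrt(2.0 / N)

# A hypothetical smooth block: a gentle gradient, typical of real images.
x, y = np.meshgrid(np.arange(N), np.arange(N))
X = 100.0 + 5.0 * x + 3.0 * y

W = C @ X @ C.T                               # transformed image: 64 weights
W_small = np.where(np.abs(W) < 1.0, 0.0, W)   # discard the small weights
X_approx = C.T @ W_small @ C                  # approximate reconstruction

kept = np.count_nonzero(W_small)
err = np.max(np.abs(X - X_approx))
# Only a handful of the 64 weights survive, yet the reconstruction error
# stays below one gray level.
print(kept, err)
```

For this smooth block, most of the energy sits in a few low-frequency weights, so storing only the surviving weights (and their positions) compresses the block with almost no visible loss; busier blocks keep more weights.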
