Transform Methods and Image Compression

An introduction to JPEG and wavelet transform techniques using Octave and Matlab.

This article has its origins in a data compression course we've been developing over the past few years at Auburn University. The course is elementary and begins with the basic (text) compression methods of Shannon and Huffman. Some of these methods can be appreciated with pencil-and-paper examples; others, such as the effects of compression on images, call for some machine experimentation.

Students may choose to present a project as part of their course evaluation. We've seen various projects, including an amusing example of Huffman-on-a-hand-calculator, an overview presentation of PNG (Portable Network Graphics) and a project concerning smoothing in JPEG.

We will introduce the transform techniques of JPEG and wavelets, discuss some mathematical themes shared by these methods, and illustrate the use of a high-level linear algebra package in understanding such schemes. The images were generated using Octave and Matlab, primarily on GNU/Linux (x86) and Solaris (SPARC), but also on a Macintosh.

Image Compression and Transforms

Data compression methods with zero information loss have been used on image data for some time. In fact, the popular GIF format uses an LZW scheme (the basic method used in UNIX compress) to compress 256-color images. PNG is more sophisticated and capable, using a predictor (or filter) to prepare the data for a gzip-style compressor. (Greg Roelofs has an introduction to PNG and some notes on patent questions concerning GIF [see Resources 8].) However, applications using high-resolution images with thousands of colors may require more compression than can be achieved with these lossless methods.

Lossy schemes discard some of the data in order to obtain better compression. The problem, of course, is deciding just which information is to be compromised. Loss of information in compressing text is typically unacceptable, although simple schemes such as elimination of every vowel from English text may find application somewhere. The situation is different with images and sound; in those cases, some loss of data may be quite acceptable, even imperceptible.

In the 1980s, the Joint Photographic Experts Group (JPEG) was formed to develop standards for still-image compression. The specification includes both lossless and lossy modes, although the latter is perhaps of the most interest (and is usually what is meant by “JPEG compression”). G. K. Wallace has a paper (see Resources 10) discussing the standard in some detail.

The method in lossy JPEG depends on an important mathematical and physical theme for its compression: local approximation. The JPEG group took this idea and fine-tuned it with results gained from studies on the human visual system. The resulting scheme enjoys wide use, in part because it is an open standard but mostly because it does well on a large class of images, with fairly modest resource requirements.

JPEG and wavelet schemes fall under the general category of transform methods. Wavelet techniques were developed more recently than the classical transform used in JPEG, and they are a consequence of the never-ending search for “better” basic images.

Roughly speaking, the first step in lossy compression schemes like JPEG and wavelets is to break down an image into a weighted sequence of simpler, more basic images. At this stage, the image may be reconstructed exactly from knowledge of the basic images and their corresponding weights. The effectiveness of the method depends to a great extent on the choice of the basic images. Once a set of basic images, or basis, has been chosen, arbitrary images can be replaced by equivalent collections of weights. A basic image having a correspondingly large weight is an indication of its characteristic importance in the overall image. (The assumption here is that the basis images have been normalized, so that they have the same mathematical size.)
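
To make the decomposition concrete, here is a minimal Octave/Matlab sketch; the 2×2 image and the Haar-like basis are our own illustration, not a basis used by JPEG or any wavelet standard:

    X = [100 50; 75 225];          % a 2x2 gray-scale "image"

    % Four orthonormal basis images (a Haar-like basis): each has unit
    % Frobenius norm, so they all have the same mathematical size.
    B = { [1  1;  1  1]/2, ...     % overall brightness
          [1 -1;  1 -1]/2, ...     % left/right variation
          [1  1; -1 -1]/2, ...     % top/bottom variation
          [1 -1; -1  1]/2 };       % diagonal variation

    w = zeros(1, 4);               % the weights (the transform of X)
    for k = 1:4
      w(k) = sum(sum(X .* B{k})); % weight = inner product with basis image
    end

    R = zeros(2, 2);               % rebuild X from the weights alone
    for k = 1:4
      R = R + w(k) * B{k};
    end
    % R equals X exactly (up to floating-point rounding).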

The mathematics behind this process is expressed in the language of linear algebra. There is considerable mathematical freedom in the choice of basis images; however, in practice they are usually chosen to exhibit features intrinsic to the class of images of interest. For example, JPEG chooses basic images designed to reflect certain classical spatial frequencies.
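
As an illustration, each of JPEG's basis images is, up to normalization, a product of sampled cosine waves at a particular pair of horizontal and vertical frequencies. The following sketch builds one such 8×8 basis image; the frequency pair (1,2) is an arbitrary choice for display:

    n = 8;                                    % JPEG transforms 8x8 blocks
    c = @(k) sqrt((1 + (k > 0)) / n);         % DCT normalization factor
    d = @(k) c(k) * cos(pi * (2*(0:n-1) + 1) * k / (2*n))';  % 1-D cosine vector
    basis_12 = d(1) * d(2)';                  % 8x8 basis image for frequencies (1,2)
    % imagesc(basis_12) displays its spatial-frequency pattern.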

The process of using a basis to resolve an image into a collection of weights is called a transform. To simplify things, we'll consider gray-scale images (color is discussed briefly in the conclusion), which can be represented as m×n arrays of integers. The range of values isn't important in understanding the mathematical ideas, although it is common to restrict values to the interval [0,255], giving a total of 256 levels of gray. As an example, Figure 3(a) shows an image containing 256×256 pixels with 145 shades of gray represented.
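
Counting the gray levels actually present is a one-liner; in the sketch below, the file name is hypothetical, and imread support for the image format is assumed:

    X = double(imread('gray.png'));    % hypothetical file; m-by-n gray values
    nlevels = numel(unique(X(:)))      % e.g., 145 for the image in Figure 3(a)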

Mathematically, any basis for the space of m×n gray-scale images must contain exactly mn images—the number of pixels in an m×n image. Consequently, the transform of an m×n image will have mn weights. The weights can be conveniently arranged into an m×n array called the transformed image, even though it isn't a true image at all.
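
A small Octave/Matlab sketch makes the bookkeeping visible. The orthonormal cosine basis is built from scratch here (the dctm helper is our own construction, so no toolbox functions are needed):

    % Orthonormal 1-D DCT matrix: row k holds the k-th cosine basis vector.
    dctm = @(n) diag([sqrt(1/n); sqrt(2/n)*ones(n-1,1)]) * ...
                cos(pi * (0:n-1)' * (2*(0:n-1) + 1) / (2*n));

    m = 4;  n = 6;
    X  = round(255 * rand(m, n));      % an arbitrary m-by-n "image"
    W  = dctm(m) * X * dctm(n)';       % the mn weights, arranged m-by-n
    X2 = dctm(m)' * W * dctm(n);       % inverting the transform recovers X
    % max(abs(X(:) - X2(:))) is on the order of machine precision.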

The transformation process, in itself, is certainly not a compression technique (since the transformed image is the same size as the original), but it can lead to one. Suppose the basis images can be chosen so that, for a wide class of images, many of the weights turn out to be small: for a given image, set these small weights to zero and use the resulting array of modified weights to represent it. Since the transform of the image has been modified, it can be used only to approximate the original. How good is the approximation? That depends on how good the scheme is for throwing out nonzero weights, that is, on the appropriateness of the basis elements and the number of weights which can be discarded. JPEG and wavelet methods both employ this type of process and offer significant compression benefits, often with minimal impact on the quality of the reproduction. They differ in the choice of basis images, i.e., in the transform used, and subsequently in the method used to discard small weights. However, both share the idea of picking a basis that can efficiently represent an image, often using only a small number of its basic images.
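
Here is a rough end-to-end sketch of the transform-threshold-reconstruct idea, reusing the dctm construction above. It is not the actual JPEG algorithm, which works on 8×8 blocks and quantizes weights rather than thresholding them; the 10% retention rate is arbitrary, and a random array is close to a worst case, since a real image concentrates far more of its information in a few weights:

    dctm = @(n) diag([sqrt(1/n); sqrt(2/n)*ones(n-1,1)]) * ...
                cos(pi * (0:n-1)' * (2*(0:n-1) + 1) / (2*n));

    X = round(255 * rand(64));          % stand-in for a real 64x64 image
    D = dctm(64);
    W = D * X * D';                     % transform: 64*64 = 4096 weights

    keep = 0.10;                        % retain the largest 10% of the weights
    w = sort(abs(W(:)), 'descend');
    t = w(round(keep * numel(w)));      % magnitude threshold
    Wt = W .* (abs(W) >= t);            % set the small weights to zero

    Xhat = D' * Wt * D;                 % approximate reconstruction
    relerr = norm(X - Xhat, 'fro') / norm(X, 'fro')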
