LIMP: Large Image Manipulation Project

Designing a new library for processing large images using a minimal amount of memory.

Library design is an imprecise art. No design can practically anticipate all possible uses for a library of any significant size or complexity. Thus, it is inevitable that many libraries (and programs) reach evolutionary dead ends, where new uses or algorithms no longer fit nicely into the existing architecture.

The quickest short-term solution is usually to hack in new interfaces, but over time the accumulation of hacks tends to reduce the stability, understandability and maintainability of the code. The software industry comes up against this problem often, but fortunately has found a simple yet elegant solution: start over. Usually, rewriting lower- to middle-level interfaces can remove the accumulated hacks. However, if the design criteria in question are incorporated throughout the code, sometimes only a total rewrite can truly help.

This is not solely a trait of software design. Software engineering is an expression of mathematics in a confined space (your computer). Through this heritage, it shares traits with other inexact sciences such as physics, where rewriting or reworking theories is not uncommon.

Starting Over

During the last five years, I've written many image-processing algorithms, from specialized routines for machine vision to complete libraries for commercial video and aerial image-processing software. The last commercial library has been in use for three years and has weathered many interface changes and enhancements. But just as each library was in some ways an improvement over previous attempts, I saw ways to improve performance and capability. Following in the footsteps of the makers of The Six Million Dollar Man, I wanted to make it faster, smarter and better than before, and at significantly reduced cost.

My commercial library had a large amount of code tied to it, so simply modifying the existing code was not an option. It seemed as if I would never be able to incorporate a new library into my commercial work because of enormous design incompatibilities. Rather than have this new library be destined to collect electronic dust on my hard drive, I decided to start completely from scratch as open source. In late November 1998, the Large Image Manipulation Project (LIMP) was born.

Even as open source, this library would likely have been inconspicuous enough to draw little, if any, attention. However, after a few months spent developing LIMP in my spare time, Open Source Remote Sensing (OSRS, http://remotesensing.org/) was launched. I was thrilled at the thought of having an open source library that was actually useful to someone, so LIMP was moved to OSRS for public development.

Speed, Ease of Use and Memory

The purpose of LIMP is to allow the processing of large images using a minimal amount of memory. A number of available libraries can be used for image processing, and any of them could produce identical results. The differences between these libraries can often be summed up by answering a few questions:

  • Can the data be processed on demand, or must all the processed data be in memory or on disk?

  • How easy is it to write new algorithms for the library?

  • How efficient is it?

The simplest image processing library would first allow loading an image into memory, then provide pixel-level access to the data (read and write), and finally allow storing the data back to disk. Advantages of such a scheme include a simple set of interfaces and (given enough memory or small enough images) nearly optimal computational efficiency. Disadvantages include high memory usage and therefore poor scaling with image size or number of images.
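To make the trade-off concrete, below is a minimal C++ sketch of that simplest design. The class and method names are illustrative only (they are not LIMP's interfaces), and it handles only binary grayscale PGM files, ignoring header comments, so that no external library is needed.

    // Minimal whole-image-in-memory sketch (illustrative names, not
    // LIMP's API). Supports only binary "P5" PGM without comments.
    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <vector>

    class SimpleImage {
    public:
        // Step 1: load the entire image into memory.
        bool load(const char *path) {
            std::ifstream in(path, std::ios::binary);
            std::string magic;
            int maxval = 0;
            in >> magic >> width_ >> height_ >> maxval;
            if (!in || magic != "P5" || maxval > 255)
                return false;
            in.get();  // consume the single whitespace byte after the header
            pixels_.resize(static_cast<size_t>(width_) * height_);
            in.read(reinterpret_cast<char *>(pixels_.data()), pixels_.size());
            return static_cast<bool>(in);
        }

        // Step 2: pixel-level read/write access.
        uint8_t &pixel(int x, int y) {
            return pixels_[static_cast<size_t>(y) * width_ + x];
        }

        // Step 3: store the data back to disk.
        bool save(const char *path) const {
            std::ofstream out(path, std::ios::binary);
            out << "P5\n" << width_ << " " << height_ << "\n255\n";
            out.write(reinterpret_cast<const char *>(pixels_.data()),
                      pixels_.size());
            return static_cast<bool>(out);
        }

        int width() const { return width_; }
        int height() const { return height_; }

    private:
        int width_ = 0, height_ = 0;
        std::vector<uint8_t> pixels_;  // the entire image lives here
    };

    // Usage sketch: invert every pixel of an image.
    //   SimpleImage img;
    //   if (img.load("input.pgm")) {
    //       for (int y = 0; y < img.height(); ++y)
    //           for (int x = 0; x < img.width(); ++x)
    //               img.pixel(x, y) = 255 - img.pixel(x, y);
    //       img.save("output.pgm");
    //   }

The disadvantage is visible in the single resize() call: the whole image must fit in pixels_, so memory use grows linearly with image size and with the number of images held open.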

If memory usage is not a problem, this would be the optimal way of dealing with images. Unfortunately, memory cannot yet be considered infinite for many image-processing demands. As an example, the very first test of my last commercial library was to load 1200 images consisting of 300 gigabytes of data and display them at once. Actual processing was done in blocks of images to avoid having a failure wipe out weeks of processing time. In an attempt to avoid repeating historical blunders, I would never say that no one will ever have several hundred gigabytes of memory. I imagine when that time comes, people will work with even larger data sets.

In handling large images, many proprietary libraries reduce memory usage by sacrificing ease of use and efficiency. Considerable effort can go into reducing these losses, but all such libraries are at a slight disadvantage in these areas, and LIMP is no exception.

Learning from past experience, I designed many of LIMP's interfaces to promote speed-enhancing optimizations and to group complex code into a few internal locations, where it can be more easily maintained. Because this complex code is grouped into reusable templates, many kinds of functions and conversions can be written without having to deal with the complexity normally encountered in large-image processing.
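As an illustration of what such a reusable template might look like, the sketch below concentrates all the block-walking and partial-block bookkeeping in one generic driver; these names and interfaces are my own guesses at the idea, not code taken from LIMP. A new point operation then needs no knowledge of image size or memory limits.

    // Hypothetical block-processing template (not LIMP's actual code).
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Stand-in for an on-demand reader/writer, e.g., one that seeks
    // within a tiled file. readBlock is assumed to pack the w-by-h
    // block contiguously into buf; writeBlock does the reverse.
    struct BlockSource {
        int width = 0, height = 0;
        virtual void readBlock(int x, int y, int w, int h, uint8_t *buf) = 0;
        virtual void writeBlock(int x, int y, int w, int h,
                                const uint8_t *buf) = 0;
        virtual ~BlockSource() = default;
    };

    // All the complexity lives here, once: block traversal, edge
    // blocks and buffer management are hidden from algorithm authors.
    template <typename PixelOp>
    void forEachBlock(BlockSource &src, int blockW, int blockH, PixelOp op) {
        std::vector<uint8_t> buf(static_cast<size_t>(blockW) * blockH);
        for (int y = 0; y < src.height; y += blockH) {
            for (int x = 0; x < src.width; x += blockW) {
                int w = std::min(blockW, src.width - x);
                int h = std::min(blockH, src.height - y);
                src.readBlock(x, y, w, h, buf.data());
                for (size_t i = 0; i < static_cast<size_t>(w) * h; ++i)
                    buf[i] = op(buf[i]);  // the algorithm sees one pixel
                src.writeBlock(x, y, w, h, buf.data());
            }
        }
    }

    // A new "algorithm" is now a one-liner, e.g., inversion:
    //   forEachBlock(src, 256, 256,
    //                [](uint8_t v) { return uint8_t(255 - v); });

With this arrangement, peak memory use is one block (blockW by blockH bytes) no matter how large the image is, which is exactly the on-demand behavior described above.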
