LIMP: Large Image Manipulation Project
LIMP is almost pure C++, as is most of the code I've written in the last several years. My feeling is that the compiler should do as much of the work as possible, allowing the programmer to focus on higher-level concepts. C++ is certainly not the holy grail of programming, but I feel that using an object-oriented language is much simpler than manually approximating an object-oriented interface in a non-object-oriented language. After all, everyone knows all computer languages can be reduced to assembly language; it is just a matter of how much work you have to do vs. how much the compiler does for you.
The Qt library (www.troll.no) is used as both a widget set and a template library. Since all of the core library is independent of the display subsystem, a library such as the STL could have been used, but Qt is better documented and behaves consistently across platforms. I also intended to write graphical interfaces using the core libraries, so Qt was a natural choice.
Plug-in types are used for a number of interfaces. This makes it easy to add new implementations without changing existing code. Image loading, saving and serialization are accomplished using plug-in interfaces. Simple interpolation filters are also implemented this way. The plug-in manager is very generic and can handle plug-ins of any type. This also makes it possible to add run-time loading of external plug-ins, although this feature is not yet implemented.
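A rough sketch of how such a generic manager might look is shown below. The class and function names here (PluginManager, ImageLoader, add, create) are hypothetical stand-ins chosen for the example, not LIMP's actual API.

```cpp
// Sketch of a generic plug-in registry keyed by name; one manager can be
// instantiated per interface type (loaders, savers, interpolation filters).
// All class and function names are illustrative, not LIMP's real classes.
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct ImageLoader {                              // example plug-in interface
    virtual ~ImageLoader() = default;
    virtual bool canLoad(const std::string &filename) const = 0;
};

template <class Interface>
class PluginManager {
public:
    using Factory = std::function<std::unique_ptr<Interface>()>;

    void add(const std::string &name, Factory make) { factories[name] = make; }

    std::unique_ptr<Interface> create(const std::string &name) const {
        auto it = factories.find(name);
        if (it == factories.end())
            return nullptr;                       // unknown plug-in name
        return it->second();                      // invoke the registered factory
    }

private:
    std::map<std::string, Factory> factories;
};

struct TiffLoader : ImageLoader {                 // hypothetical TIFF plug-in
    bool canLoad(const std::string &f) const override {
        return f.size() > 4 && f.compare(f.size() - 4, 4, ".tif") == 0;
    }
};

int main() {
    PluginManager<ImageLoader> loaders;
    loaders.add("tiff", [] { return std::make_unique<TiffLoader>(); });

    std::unique_ptr<ImageLoader> loader = loaders.create("tiff");
    std::cout << (loader && loader->canLoad("map.tif")) << "\n";  // prints 1
}
```

Because the manager is a template over the interface type, adding a new kind of plug-in (or a new implementation of an existing kind) never requires touching the code that consumes it; run-time loading of external plug-ins would only need the factory to come from a shared object instead of a compiled-in registration.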
An image in LIMP is nothing more than a class for caching and moving data. By itself, an image does not produce, or in any way modify, the data. All production and processing steps are performed in “layers”. An image can contain any number of layers, but the minimum for useful work is one—the source layer. This layer normally corresponds to some type of file loader (e.g., tiff), but can also be a simpler type such as an in-memory buffer, a constant-value image, or anything that can produce an image from scratch. In order to deal with large images efficiently, all data should be produced or loaded on demand.
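The sketch below illustrates that division of labor. Image, Layer, Tile, ConstantLayer and the 64x64 tile size are all hypothetical names and values chosen for the example, not LIMP's real interfaces.

```cpp
// Sketch of the image/layer relationship: the Image owns a stack of layers
// and a tile cache, and only the layers produce pixel data, on demand.
#include <map>
#include <memory>
#include <utility>
#include <vector>

struct Tile {                           // a fixed-size block of pixel samples
    int x = 0, y = 0;                   // tile origin within the image
    std::vector<float> samples;         // sample type simplified to float here
};

class Layer {                           // anything that can yield a tile
public:
    virtual ~Layer() = default;
    virtual Tile produceTile(int x, int y) = 0;
};

class ConstantLayer : public Layer {    // simplest possible source layer
public:
    explicit ConstantLayer(float value) : value_(value) {}
    Tile produceTile(int x, int y) override {
        Tile t;
        t.x = x;
        t.y = y;
        t.samples.assign(64 * 64, value_);       // 64x64 tiles, for example
        return t;
    }
private:
    float value_;
};

class Image {                           // caches tiles; produces nothing itself
public:
    void pushLayer(std::unique_ptr<Layer> l) { layers_.push_back(std::move(l)); }

    const Tile &tileAt(int x, int y) {
        std::pair<int, int> key(x, y);
        auto it = cache_.find(key);
        if (it == cache_.end())         // produce on demand, then cache
            it = cache_.emplace(key, layers_.back()->produceTile(x, y)).first;
        return it->second;
    }

private:
    std::vector<std::unique_ptr<Layer>> layers_;   // bottom layer is the source
    std::map<std::pair<int, int>, Tile> cache_;
};
```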
Other layers could perform such functions as data format conversion, radiometric or geometric transforms, and mosaics by combining multiple images. A number of data-type conversions are predefined (e.g., from RGB to YCbCr or RGB to gray-scale), and many other conversions can be easily defined. When processing layers are added to an image, everything about the image can be affected. The 2-D properties (width/height) can change, or the depth (samples per pixel, pixel data type, etc.) could be modified. For example, one class that modifies 2-D size is the zoom layer, which produces a new virtual image that is a magnification of an existing one. This can be used not only for zooming in on a visual image, but for up-sampling nearly any supported type.
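As an illustration, a depth-changing layer such as an RGB-to-gray-scale conversion might look roughly like the following sketch. The names mirror the hypothetical ones above, and the common luma weights are used here simply as an example, not as LIMP's actual coefficients.

```cpp
// Sketch of a processing layer that changes depth: it pulls a 3-sample RGB
// tile from the layer below it and emits a 1-sample gray-scale tile, so the
// image's depth changes without callers having to know.
#include <cstddef>
#include <memory>
#include <vector>

struct Tile {                               // minimal stand-in, as above
    std::vector<float> samples;
    int samplesPerPixel = 3;
};

struct Layer {
    virtual ~Layer() = default;
    virtual Tile produceTile(int x, int y) = 0;
};

class GrayScaleLayer : public Layer {       // changes depth: 3 samples -> 1
public:
    explicit GrayScaleLayer(std::unique_ptr<Layer> input)
        : input_(std::move(input)) {}

    Tile produceTile(int x, int y) override {
        Tile rgb = input_->produceTile(x, y);     // pull RGB data on demand
        Tile gray;
        gray.samplesPerPixel = 1;
        gray.samples.reserve(rgb.samples.size() / 3);
        for (std::size_t i = 0; i + 2 < rgb.samples.size(); i += 3)
            gray.samples.push_back(0.299f * rgb.samples[i] +       // R
                                   0.587f * rgb.samples[i + 1] +   // G
                                   0.114f * rgb.samples[i + 2]);   // B
        return gray;
    }

private:
    std::unique_ptr<Layer> input_;          // the layer below in the chain
};
```

A zoom layer would follow the same pattern, except that it would change the 2-D size of the produced tiles rather than the number of samples per pixel.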
By design, as much processing as possible is moved into an assembly-line approach. The basic unit for loading and processing data is a tile. All requests for data are made to an Image class, where they are broken up into tiles for processing. By determining ahead of time which tiles are needed, additional optimizations can be performed (for example, reordering the tile requests to optimize data cache hits). Each tile is processed, and any not already in the cache are created. All the complexity of chaining together layers of various types is handled by the Image class, which simplifies layer construction. When the time comes for a layer to process a given tile, it is presented with the input data space already filled and the output data space already allocated; therefore, it only has to process the pixels.
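The contract between the Image class and a layer might be sketched like this; processTile and the surrounding types are illustrative only, not LIMP's actual signatures.

```cpp
// Sketch of the "assembly line" contract: the Image works out which tiles
// are needed, fills the input tile and allocates the output tile, so the
// layer's job reduces to a tight loop over pixels.
#include <cstddef>
#include <vector>

struct Tile {
    std::vector<float> samples;
};

struct ProcessingLayer {
    virtual ~ProcessingLayer() = default;
    // By the time this is called, 'in' has been filled and 'out' has been
    // allocated by the Image class; the layer only touches pixels.
    virtual void processTile(const Tile &in, Tile &out) = 0;
};

struct InvertLayer : ProcessingLayer {      // trivial per-pixel example
    void processTile(const Tile &in, Tile &out) override {
        for (std::size_t i = 0; i < in.samples.size(); ++i)
            out.samples[i] = 1.0f - in.samples[i];
    }
};

// The Image side of the contract, in outline:
//   1. map the requested region onto a list of tile coordinates;
//   2. drop tiles already in the cache and reorder the remainder;
//   3. for each remaining tile, fill the input, allocate the output,
//      and hand both to processTile().
```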
Linux is the primary development platform; however, efforts are being made to keep the library portable to other platforms. The GNU auto-configure tools are used to test for required system characteristics. LIMP is known to build on Red Hat/ix86 5.2, Irix/Mips 6.3, and Red Hat/Alpha 5.2—each using egcs 1.1.2.
Support exists for tiff images in a variety of formats; both scan-line and tiled tiff are supported. Color and gray-scale capability is known to work, but the tiff layer also supports a number of other formats, including shorts, floats and doubles, as well as multi-channel types. Recent work by others, notably Frank Warmerdam, has resulted in support for other image formats by bridging to his open-source GDAL library.
Many optimizations designed for LIMP are not yet fully realized. This is not to say that LIMP is slow in its current state, but room for improvement exists within the design. Already, some optimizations have been added that would be difficult to retrofit into other architectures. For example, LIMP implements data-request optimizations that reorder requests to make optimal use of the image cache. Most of the optimizations are done internally, so calling objects benefit from them without any added complexity on their side.
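One way such a reordering could work is sketched below, with hypothetical names: cached tiles are served first, and the remaining misses are sorted so that neighboring tiles are produced together.

```cpp
// Illustrative tile-request reordering, not LIMP's actual code: partition
// pending requests so cache hits come first, then sort the misses in
// row-major order for better locality in the underlying file or buffer.
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

using TileCoord = std::pair<int, int>;              // (row, column)

std::vector<TileCoord> reorderRequests(std::vector<TileCoord> requests,
                                       const std::set<TileCoord> &cached) {
    // Cached tiles first: they can be returned without touching the source.
    auto firstMiss = std::stable_partition(
        requests.begin(), requests.end(),
        [&cached](const TileCoord &t) { return cached.count(t) != 0; });

    // Lexicographic order on (row, column) is row-major order.
    std::sort(firstMiss, requests.end());
    return requests;
}
```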
Today, LIMP is meant primarily for developers. The only thing usable for non-programmers is its image viewing program (imgview). imgview is a simple display tool, capable of viewing very large images quickly and without needing much memory. It takes user interface ideas from other image-processing tools to allow options such as dragging the canvas while updating the display in the background. It also supports a number of zoom filters for smoothing upsampled (i.e., enlarged) image data.