LIMP: Large Image Manipulation Project
What LIMP does now is only the tip of the iceberg. The foundation has been laid, but many algorithms remain to be written, and every area of LIMP will likely see extensions as work progresses. Of course, development will be driven by the people who contribute to LIMP and OSRS. I will outline some of the planned work to suggest the kinds of features I believe would be useful. This is not an exhaustive list; there are surely many interesting features that have yet to occur to me.
Support for geographic information needs to be added at some level, so that applications can learn how images relate to the real world and to each other. This is a basic requirement for high-level GIS programs. It may not be necessary to build this support directly into LIMP, but an image's metadata provides a convenient place to store and retrieve such information.
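To make the idea concrete, georeferencing in the style of a world file (a ground-coordinate origin plus a pixel size) could ride along in a generic key/value metadata dictionary. The C++ sketch below is hypothetical: the GeoReference struct, the tag name and the std::map stand in for whatever metadata interface LIMP ends up with.

```cpp
// A minimal sketch of storing georeferencing as image metadata.
// The GeoReference struct and the metadata map are illustrative,
// not LIMP's actual interface.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Affine georeferencing in the style of a "world file": the
// real-world coordinate of the upper-left pixel plus pixel size.
struct GeoReference {
    double originX;    // easting of pixel (0,0)
    double originY;    // northing of pixel (0,0)
    double pixelSizeX; // ground units per pixel, x
    double pixelSizeY; // ground units per pixel, y (usually negative)
};

// Serialize the georeference into a generic key/value metadata map,
// the kind of tag dictionary an image might carry.
void storeGeoReference(std::map<std::string, std::string>& meta,
                       const GeoReference& geo)
{
    std::ostringstream os;
    os << geo.originX << ' ' << geo.originY << ' '
       << geo.pixelSizeX << ' ' << geo.pixelSizeY;
    meta["geo_reference"] = os.str();
}

// Recover it again, so an application can map pixels to ground
// coordinates: ground = origin + pixel * pixelSize.
bool loadGeoReference(const std::map<std::string, std::string>& meta,
                      GeoReference& geo)
{
    auto it = meta.find("geo_reference");
    if (it == meta.end())
        return false;
    std::istringstream is(it->second);
    return static_cast<bool>(is >> geo.originX >> geo.originY
                                >> geo.pixelSizeX >> geo.pixelSizeY);
}

int main()
{
    std::map<std::string, std::string> meta; // stand-in for image metadata
    storeGeoReference(meta, {500000.0, 4600000.0, 30.0, -30.0});

    GeoReference geo;
    if (loadGeoReference(meta, geo)) {
        // Pixel (100, 200) -> ground coordinates.
        double x = geo.originX + 100 * geo.pixelSizeX;
        double y = geo.originY + 200 * geo.pixelSizeY;
        std::cout << "pixel (100,200) -> (" << x << ", " << y << ")\n";
    }
}
```

With the georeference recoverable from the metadata, any application can map pixel coordinates to ground coordinates without the library itself needing to understand map projections.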
The classes from LIMP's image viewer will be extended to handle multiple overlapping images as well as vector data. This work is also somewhat detached from the core of LIMP, because it would create a new display pipeline. The image display class already includes a sophisticated drawing class, capable of ordering and computing tiles for the display with minimal impact on the GUI. That impact can be reduced to almost nothing once Qt supports threaded event handling.
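The heart of such tiled drawing is deciding which tiles are worth touching at all. The sketch below shows the general technique: intersect the viewport with the tile grid and paint only the overlapping tiles. The names (Rect, Tile, visibleTiles) are my own illustration, not LIMP's actual classes.

```cpp
// A rough sketch of how a tiled display pipeline decides what to
// draw: given a viewport over a large image, compute the set of
// tiles that intersect it.
#include <algorithm>
#include <iostream>
#include <vector>

struct Rect { int x, y, w, h; };
struct Tile { int col, row; };

// Return every tile of size `tileSize` that overlaps `viewport`,
// clamped to the image bounds. A viewer can then fetch, decode and
// paint only these tiles instead of the whole image.
std::vector<Tile> visibleTiles(const Rect& viewport,
                               int imageW, int imageH, int tileSize)
{
    std::vector<Tile> tiles;
    int firstCol = std::max(0, viewport.x / tileSize);
    int firstRow = std::max(0, viewport.y / tileSize);
    int lastCol  = std::min((imageW - 1) / tileSize,
                            (viewport.x + viewport.w - 1) / tileSize);
    int lastRow  = std::min((imageH - 1) / tileSize,
                            (viewport.y + viewport.h - 1) / tileSize);
    for (int r = firstRow; r <= lastRow; ++r)
        for (int c = firstCol; c <= lastCol; ++c)
            tiles.push_back({c, r});
    return tiles;
}

int main()
{
    // A 512x512 window panned into a 100000x100000 image of 256-pixel tiles.
    for (const Tile& t : visibleTiles({1000, 2000, 512, 512},
                                      100000, 100000, 256))
        std::cout << "tile (" << t.col << ", " << t.row << ")\n";
}
```

Because the cost is proportional to the viewport rather than the image, the viewer stays responsive even over images far larger than memory.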
A wide variety of radiometric image adjustments will be useful. These will start as simple histogram stretches for viewing and progress to more complex color and intensity modifications. This type of modification should add very little overhead to LIMP, which was designed specifically to minimize the procedural and computational cost of such objects.
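As a concrete example of the simplest case, a linear histogram stretch remaps a band's occupied range onto the full display range. The sketch below is illustrative rather than LIMP's interface; building a 256-entry lookup table once and applying it per pixel is the usual way to keep the per-pixel cost negligible on large images.

```cpp
// A minimal sketch of a linear histogram stretch: remap the band
// range [lo, hi] onto the full 0..255 display range via a lookup
// table. Illustrative code, not LIMP's interface.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Build the 256-entry lookup table once, then apply it per pixel.
std::vector<uint8_t> stretchLut(uint8_t lo, uint8_t hi)
{
    std::vector<uint8_t> lut(256);
    double scale = 255.0 / std::max(1, hi - lo);
    for (int v = 0; v < 256; ++v) {
        double s = (v - lo) * scale;
        lut[v] = static_cast<uint8_t>(std::clamp(s, 0.0, 255.0));
    }
    return lut;
}

int main()
{
    // A dark, low-contrast scanline: values cluster in 40..90.
    std::vector<uint8_t> pixels = {40, 55, 62, 70, 83, 90};
    std::vector<uint8_t> lut = stretchLut(40, 90);
    for (uint8_t& p : pixels)
        p = lut[p]; // table lookup instead of per-pixel arithmetic
    for (uint8_t p : pixels)
        std::cout << int(p) << ' ';
    std::cout << '\n';
}
```

Running this on the sample scanline spreads the clustered 40..90 values across the full 0..255 range, which is exactly the effect wanted for quick viewing.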
For a more complete list of expected modifications to LIMP, see the TODO file in the LIMP distribution. To follow progress, see the NEWS and ChangeLog files.
As with most libraries that cater to processing extremes, LIMP is not destined for a mass-market audience. Good solutions for dealing with similar image-processing demands, such as editing, already exist. The problems facing a designer of a general editing program, where every pixel may be changed interactively, are not driving our choices. Instead, LIMP is designed to deal well with scientific image-processing needs. In this category, I am most familiar with aerial and satellite image processing, but I imagine other fields have similar needs.
Images for the GIS market typically cover a large area at a relatively low image scale. One obvious parallel in another field would be small-area images at higher image scales, as might be found in microscopy work. As everyone who has played with a fractal generator knows, if you zoom far enough into an object, the original object seen at that scale covers an immense area.
Aerial and satellite images have been processed and stored on computers for only a short part of their history. As computers grow more powerful and capable of accessing larger amounts of data, new attempts will undoubtedly be made to process and understand exponentially larger data sets at ever finer resolutions. LIMP is not expected to be a final answer to these problems; it is an experiment in dealing with them while maintaining performance, ease of use and the sanity of its programmers.