Graphics Formats for Linux
Having looked at some of the common graphics formats, we finish up with a quick look at some of the programs and libraries available for converting images.
xv and ImageMagick
These two X programs display, convert, and manipulate images. xv can read and write JPEG, GIF, TIFF, PBM/PGM/PPM, and X formats, although it cannot display 24-bit images in 24-bit color. In addition to format conversions, xv can perform simple image manipulations such as rotation, flipping, cropping, magnification, and gamma correction. ImageMagick supports the same formats as xv, plus TGA. It can send output directly to a PostScript printer and has more image-manipulation capabilities than xv: cut, copy, paste, resize, flip, flop, rotate, invert, emboss, and more, applied to the whole image or to part of it.
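ImageMagick's command-line side can do these conversions non-interactively as well. A minimal sketch, where the filename photo.tga and the chosen operations are illustrative; the guard makes it a no-op when ImageMagick or the input file is missing:

```shell
# Convert a TGA file to GIF, rotating it 90 degrees on the way.
# photo.tga is a placeholder name; convert and -rotate are standard
# ImageMagick usage.
if command -v convert >/dev/null 2>&1 && [ -f photo.tga ]; then
  convert photo.tga -rotate 90 photo.gif
  echo "wrote photo.gif"
else
  echo "skipping: need ImageMagick and photo.tga"
fi
```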
The canonical way to view and convert PostScript (including EPS) is ghostscript. Ghostscript operates both from command-line options and interactively. Input is always a PostScript file; output may be any of several file formats, printer languages, or screen types. By default, ghostscript sends its output to an X window, but it can also save images to a file. Ghostview is a popular front end for ghostscript that improves its screen handling.
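Saving to a file rather than the screen is just a matter of selecting an output device. A sketch using standard ghostscript flags, with input.ps as a placeholder input; the guard skips cleanly if ghostscript or the file is absent:

```shell
# Render a PostScript file to raw PPM pages at 150 dpi, one output
# file per page. -dBATCH/-dNOPAUSE suppress the interactive prompt;
# -sDEVICE selects the output format.
if command -v gs >/dev/null 2>&1 && [ -f input.ps ]; then
  gs -dBATCH -dNOPAUSE -sDEVICE=ppmraw -r150 \
     -sOutputFile=page-%03d.ppm input.ps
else
  echo "skipping: need ghostscript and input.ps"
fi
```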
The PBMPLUS utilities are a set of 120 programs for image conversion and manipulation. PBMPLUS defines three intermediate formats: PBM, PGM, and PPM (see listing above). The basic philosophy is that with twenty formats, converting directly between every pair would require on the order of twenty squared, or 400, programs or subroutines; routing every conversion through an intermediate format reduces that to two times twenty, or forty. In addition to the conversion programs, there are a number of simple image-processing programs: scaling, rotating, smoothing, convolution, gamma correction, cropping, and more. NETPBM is a newer version of PBMPLUS, but some versions suffer from serious bugs.
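Part of what makes the intermediate formats work so well as a hub is that they are simple enough to write by hand. The plain (ASCII) PPM variant is just a magic number, the dimensions, the maximum sample value, and one RGB triple per pixel:

```shell
# A 2x2 plain PPM written by hand. Any PBMPLUS converter (e.g.,
# ppmtogif, shown commented since the tools may not be installed)
# could pick it up from here.
cat > tiny.ppm <<'EOF'
P3
2 2
255
255 0 0     0 255 0
0 0 255   255 255 255
EOF
head -1 tiny.ppm    # the magic number identifies the format: P3
# ppmtogif tiny.ppm > tiny.gif
```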
C Library support
Substantial support exists for C programmers working with graphics. Libraries for JPEG/JFIF, TIFF, and PNG are now available. In addition, code fragments for many of the other formats are readily available from Internet archive sites.
IJG JPEG/JFIF library
The easiest way to do JPEG programming is to use the Independent JPEG Group's (IJG) library. The library is based on the JFIF specification and includes two programs found on many Unix systems for compressing and decompressing JFIF images: cjpeg and djpeg. The library is available as source and as compiled code.
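A typical round trip with the two programs looks like the following sketch. The flags are the documented IJG ones; the tiny generated PPM stands in for a real image, and the guard makes the conversion a no-op when the tools are not installed:

```shell
# Compress a PPM to JFIF and decompress it back again.
printf 'P3\n1 1\n255\n128 128 128\n' > gray.ppm   # 1x1 placeholder image
if command -v cjpeg >/dev/null 2>&1 && command -v djpeg >/dev/null 2>&1; then
  cjpeg -quality 75 gray.ppm > gray.jpg   # PPM -> JFIF
  djpeg -pnm gray.jpg > back.ppm          # JFIF -> PPM
else
  echo "skipping: cjpeg/djpeg not installed"
fi
```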
SGI TIFF library
Like the JPEG library, a full set of routines implementing TIFF is available. Written by Sam Leffler and SGI, this library is also available as source and as compiled code. It comes with programs for converting, dithering, and splitting TIFF files, and for displaying information about them.
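Two of the library's companion tools in action, as a sketch; sample.tif is a placeholder name, and the guard skips cleanly if the libtiff tools or the file are absent:

```shell
# Dump a TIFF file's tags (size, compression, ...) and rewrite it
# without compression using the tools that ship with the library.
if command -v tiffinfo >/dev/null 2>&1 && [ -f sample.tif ]; then
  tiffinfo sample.tif
  tiffcp -c none sample.tif uncompressed.tif
else
  echo "skipping: need libtiff tools and sample.tif"
fi
```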
CompuServe PNG toolbox
With the recent announcement of support for the PNG format, CompuServe also announced a PNG toolbox. The toolbox uses the zlib library for the LZ77 coding and is intended to speed acceptance of the new format. Its use will be royalty-free. A beta version is available on CompuServe.
There are several archive sites on the Internet where programs and further information can be found:
Anonymous FTP sites:
- ftp://sgi.com/graphics/tiff: TIFF version 6.0 specification and the source for the SGI TIFF library
- ftp://sunsite.unc.edu/pub/Linux/apps/graphics/convert: PBMPLUS
- ftp://sunsite.unc.edu/pub/Linux/X11/xapps/graphics: ImageMagick
- ftp://sunsite.unc.edu/pub/Linux/libs/graphics: compiled versions of the JPEG and TIFF libraries for Linux
- ftp://sunsite.unc.edu/pub/Linux/apps/graphics/viewers: compiled version of ghostscript for Linux
- ftp://ftp.wuarchive.edu/: numerous code fragments for image formats, scattered throughout this large archive
Other sources:
- compuserve.com: GRAPHSUPPORT forum, library 20, LP071.zip (the beta version of the PNG toolbox)
- www.uwm.edu/~ggraef: list of links and other pointers to format information