Image Processing with QccPack and Python
Limited bandwidth and storage space are always a challenge. Data compression is often the best solution. When it comes to image processing, compression techniques are divided into two types: lossless and lossy data compression.
QccPack, developed by James Fowler, is an open-source collection of library routines and utility programs that provides reliable implementations of quantization and other common compression techniques.
The libraries that make up QccPack have a clean interface, so they can be upgraded without modifying application code. QccPack is built as a static library, libQccPack.a, and also supports dynamic linking via libQccPack.so.
Through the QccPack library routines you can do entropy coding, wavelet transforms, wavelet-based subband coding, error coding, image processing and a number of general-purpose operations. Optional modules can be added to the QccPack library later. QccPackSPIHT is one such module; it provides an implementation of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm for image compression and includes two utility executables, spihtencode and spihtdecode, that perform SPIHT encoding and decoding of grayscale images.
QccPack and QccPackSPIHT are available for download from the QccPack Web page on SourceForge. Red Hat users can find source and binary RPMs there; users of other systems will need to compile the source code. QccPack has been compiled successfully on Solaris/SPARC, Irix, HP-UX, Digital UNIX Alpha and Digital RISC/Ultrix.
You can use QccPack to train a VQ codebook on an image and then code the image with full-search VQ followed by arithmetic coding. Take a 512×512 grayscale Lenna image, for example. The following sample procedure uses QccPack's command-line utilities; run them from a shell prompt, or drive them from Python as sketched after the steps.
Step 1: convert the PGM image file to a DAT file by extracting four-dimensional (2×2) vectors of pixels:
imgtodat -ts 4 lenna.pgm.gz lenna.4D.dat.gz
Step 2: train a 256-codeword VQ codebook on the DAT file with GLA (stopping threshold = 0.01):
gla -s 256 -t 0.01 lenna.4D.dat.gz lenna.4D256.cbk
Step 3: vector quantize the DAT file to produce a channel of VQ indices:
vqencode lenna.4D.dat.gz lenna.4D256.cbk lenna.vq.4D256.chn
Step 4: calculate first-order entropy of VQ indices (as bits/pixel):
chnentropy -d 4 lenna.vq.4D256.chn
First-order entropy of channel lenna.vq.4D256.chn is: 1.852505 (bits/symbol)
Step 5: arithmetic-encode channel of VQ indices:
chnarithmeticencode -d 4 lenna.vq.4D256.chn lenna.vq.4D256.chn.ac
Channel lenna.vq.4D256.chn arithmetic coded to: 1.830322 (bits/symbol)

rm lenna.vq.4D256.chn
Step 6: decode arithmetic-coded channel:
chnarithmeticdecode lenna.vq.4D256.chn.ac lenna.vq.4D256.chn
Step 7: inverse VQ channel to produce quantized data:
vqdecode lenna.vq.4D256.chn lenna.4D256.cbk lenna.vq.4D256.dat.gz
Step 8: convert from DAT to PGM format:
dattoimg 512 512 lenna.vq.4D256.dat.gz lenna.vq.4D256.pgm
Step 9: calculate distortion between original and coded images:
imgdist lenna.pgm.gz lenna.vq.4D256.pgm
The distortion between files lenna.pgm.gz and lenna.vq.4D256.pgm is:
22.186606 dB (SNR)
36.719100 dB (PSNR)
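Because the QccPack tools are ordinary command-line programs, the whole pipeline is easy to script from Python. The following is a minimal sketch that replays the steps above with the standard subprocess module; it assumes the QccPack utilities are on your PATH and accept the options exactly as listed in the steps:

import subprocess

# Replay the QccPack VQ + arithmetic coding pipeline shown above.
# Assumes the QccPack utilities are installed and on the PATH.
commands = [
    "imgtodat -ts 4 lenna.pgm.gz lenna.4D.dat.gz",
    "gla -s 256 -t 0.01 lenna.4D.dat.gz lenna.4D256.cbk",
    "vqencode lenna.4D.dat.gz lenna.4D256.cbk lenna.vq.4D256.chn",
    "chnentropy -d 4 lenna.vq.4D256.chn",
    "chnarithmeticencode -d 4 lenna.vq.4D256.chn lenna.vq.4D256.chn.ac",
    "chnarithmeticdecode lenna.vq.4D256.chn.ac lenna.vq.4D256.chn",
    "vqdecode lenna.vq.4D256.chn lenna.4D256.cbk lenna.vq.4D256.dat.gz",
    "dattoimg 512 512 lenna.vq.4D256.dat.gz lenna.vq.4D256.pgm",
    "imgdist lenna.pgm.gz lenna.vq.4D256.pgm",
]

for cmd in commands:
    print(">", cmd)
    subprocess.check_call(cmd.split())   # abort the pipeline if any step fails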
The Python Imaging Library adds image processing capabilities to the Python interpreter. This library provides extensive file format support, an efficient internal representation and fairly powerful image processing capabilities. The core image library is designed for fast access to data stored in a few basic pixel formats. The library contains some basic image processing functionality, including point operations, filtering with a set of built-in convolution kernels and color space conversions. The Python Imaging Library is ideal for image archival and batch processing applications. You can use the library to create thumbnails, convert between file formats and print images. The library also supports image resizing, rotation and arbitrary affine transforms.
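A short sketch of that everyday usage follows; the file names are only placeholders, but the calls (Image.open, thumbnail, resize, rotate, convert, save) are standard Python Imaging Library operations:

from PIL import Image

# Open an image, make a thumbnail, convert formats and apply simple transforms.
im = Image.open("lenna.pgm")
print(im.format, im.size, im.mode)

thumb = im.copy()
thumb.thumbnail((128, 128))           # resize in place, preserving aspect ratio
thumb.save("lenna_thumb.png")         # output format inferred from the extension

small = im.resize((256, 256)).rotate(45)    # resizing and rotation
small.convert("L").save("lenna_gray.jpg")   # ensure grayscale, save as JPEG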
The Python Imaging Library uses a plugin model that allows you to add your own decoders to the library, without any changes to the library itself. These plugins have names such as XxxImagePlugin.py, where Xxx is a unique format name (usually an abbreviation).
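As a rough illustration of that plugin model, here is a minimal decoder sketch for a made-up "XYZ" format (the header layout and file name XyzImagePlugin.py are invented for the example; exact attribute names can vary between PIL and newer Pillow releases):

from PIL import Image, ImageFile

# Hypothetical format: 4-byte magic "XYZ1", two 4-digit ASCII dimensions,
# then raw 8-bit grayscale pixel data.
class XyzImageFile(ImageFile.ImageFile):
    format = "XYZ"
    format_description = "hypothetical XYZ raster image"

    def _open(self):
        header = self.fp.read(12)
        if header[:4] != b"XYZ1":
            raise SyntaxError("not an XYZ file")
        # On recent Pillow releases, set self._size instead of self.size.
        self.size = int(header[4:8]), int(header[8:12])
        self.mode = "L"
        # (decoder name, region, data offset, decoder parameters)
        self.tile = [("raw", (0, 0) + self.size, 12, ("L", 0, 1))]

Image.register_open(XyzImageFile.format, XyzImageFile)
Image.register_extension(XyzImageFile.format, ".xyz")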