New Projects - Fresh from the Labs
I realise I tend to cover wacky things like molecule imaging, telekinesis and 3-D knitting software, but this is something that actually may be of genuine industrial use in everyday life. libdmtx is an open-source project dedicated to providing tools for reading and writing 2-D Data Matrix barcodes. The Data Matrix standard (en.wikipedia.org/wiki/Data_Matrix) is gaining widespread popularity due to its impressive features, but it may be of particular interest to the FOSS community because it's unencumbered by patents and royalty-free (thus, free to use and distribute). Also, the existing proprietary solutions can be quite expensive, and libdmtx now has reached a point where it realistically can save some users six figures every year.
Data Matrix barcodes have been around since the 1980s, but for years, they were used only to mark electronic components. More recently, they have been adopted by a wide variety of industries in the US and Europe, and they are becoming especially popular with mobile phone developers because they work well with small digital cameras. Most US readers instantly will recognize Data Matrix barcodes, as they appear on most first-class mail delivered by the US Postal Service. Curious readers can snap a photo of their mail with a camera or Webcam and scan it with libdmtx without purchasing any special hardware (it also works well with faxed and scanned images).
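As a quick sketch of that no-special-hardware workflow, assuming the fswebcam capture tool (not part of libdmtx; any capture program will do) and the dmtxread utility covered below, you could grab a frame from a webcam and feed it straight to the reader:

$ fswebcam -r 1280x720 --no-banner mail.jpg
$ dmtxread mail.jpg

If a Data Matrix code is somewhere in the frame, dmtxread prints its contents to the terminal.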
Installing libdmtx is fairly straightforward, via either a Debian package available under the name libdmtx-utils or a source tarball. For those installing via source, compiling is basically the standard affair of:
$ ./configure
$ make
And, as root or sudo:
# make install
However, the configure script did come up with a dependency you probably won't have installed by default: GraphicsMagick. GraphicsMagick is in many distro repositories though, and to get past the configure script, I had to install libgraphicsmagick1 and libgraphicsmagick1-dev from the Ubuntu archive.
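On Debian-based systems, a single apt-get line should take care of it (these are the package names from the Ubuntu archive at the time of writing; they may differ on other distros or releases):

$ sudo apt-get install libgraphicsmagick1 libgraphicsmagick1-dev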
Once you have libdmtx compiled and installed, before you can run the program, you probably will need to refresh the shared-library cache with the following command (as root or sudo):

# ldconfig
I cover only very basic usage in reading barcodes for now, but libdmtx also will write barcodes, along with a bunch of other features that make it worth checking the man pages. First, grab an image to test. If you have a photo of a barcode around, great stuff, use that. Otherwise, some test images are available from the source tarball under the folder test/images_opengl, covering a variety of situations and tricky tests of libdmtx's abilities. Once you're ready to go, use the following command:
$ dmtxread nameofimage.png
And, that's pretty much all you need to do. dmtxread will spend a few seconds analyzing the image you've given it, and if it finds a Data Matrix barcode, it outputs the contained text to the terminal. Check the screenshot for some of the hidden messages and real-world codes that can be contained within a barcode.
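Although reading is the focus here, the companion dmtxwrite utility makes it easy to test the full round trip. As a quick sketch (assuming dmtxwrite was installed alongside dmtxread), pipe a message in, write a barcode image, then read it straight back:

$ echo -n "Hello, world" | dmtxwrite -o hello.png
$ dmtxread hello.png

If all went well, dmtxread prints the original message back to the terminal.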
What really intrigued me about this project is that you can recover barcode data from old pictures that never were meant for that purpose originally. And, the James Bond in me gets a kick out of knowing you can hide a message in a barcode in a seemingly unrelated picture as a covert method of communication—neat! Although the project has just a command-line utility for now, it's really only a basic program on top of a very clever and versatile library. This project is begging for a GUI front end, at which point, it could make some serious inroads and savings in the real industrial world.
John Knight is the New Projects columnist for Linux Journal.