Tutorial: Translating Scanned Docs
Recently, I had to access some information from a German document, but the problem was that it was only available as a poor quality scan. This is an overview of how I extracted and translated the information. The tools used were pdfimages, GIMP, gImageReader and Google Translate.
There are some OCR (optical character recognition) tools that can handle PDF files directly as an input format. Unfortunately, in this case, the scanned pages were badly skewed and needed to be tidied up by hand before processing. It would be possible, although tedious, to screen capture each and every page, but the screen resolution and the resolution of the original scanned images wouldn't match, which would result in a loss of quality.
Extract the images (pdfimages)
The solution is to extract the images from the PDF file. I used a tool called pdfimages for this.
pdfimages inputfile.pdf outputfile
will produce a series of graphic files, numbered according to the order in which they occur in the PDF document. By default, they are in PBM format (or PPM for colour images). PBM is a less common format, and pdfimages can be coaxed into outputting JPEG files instead (the -j option). However, I would advise against that, as JPEG is a lossy format, and we need to preserve as much quality as possible for documents that are going to be OCRed.
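Invoked concretely (the file names here are hypothetical), that looks like:

```shell
# Recent poppler versions can list the embedded images first, which is
# a useful check of their sizes and formats before extracting anything
pdfimages -list scanned.pdf

# Extract every image losslessly; this writes page-000.pbm,
# page-001.pbm, ... (.ppm for colour pages)
pdfimages scanned.pdf page
```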
Clean up images (GIMP)
I used GIMP to clean up the images, and fortunately, it can accept PBM as an input format. The scans themselves had a number of problems. The first thing I did was to use the rotate tool (Layer>Transform>Arbitrary Rotation...) to straighten up the image. To make this easier, I zoomed in so that I could use the top of the window as a ruler against a line of text. In this case, I found that a +1.4 degree rotation made the lines straight again.
The original image that I had to work with.
The cleaned up version.
The scans were also skewed to an extent. This meant that although the lines of text were now horizontally straight, the left margin was not vertically aligned. I used the GIMP skew tool to correct this, again working with a zoomed image.
The image was also vertically crushed, so I scaled it to add 50% to its height (Layer>Scale Layer...). Through experimentation, I discovered that this, along with converting the image to mono (Image>Mode>Indexed...), greatly improved the accuracy of the OCR software.
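For a long document, the same cleanup could be batch-scripted. The sketch below uses ImageMagick's convert rather than GIMP; the 1.4-degree angle and the page-*.pbm file names are assumptions carried over from the steps above, and pages with varying skew would still need individual attention:

```shell
#!/bin/sh
# Batch equivalent of the manual GIMP cleanup (a sketch; the angle
# and file names are assumptions taken from this article)
for f in page-*.pbm; do
    # Straighten the text, add 50% to the height, then reduce the
    # image to black and white ready for the OCR stage
    convert "$f" -rotate 1.4 -resize '100%x150%' -monochrome "clean-$f"
done
```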
Finally, I cropped the image.
Character recognition (gImageReader)
The images were now ready to put through gImageReader, a GTK front end to the Tesseract OCR tool. Although it had the resources to perform OCR on German documents out of the box, it didn't have the German dictionary it needed to spell check the output. I rectified this by adding the German MySpell dictionary using the package manager. Incidentally, gImageReader can handle PDF documents as an input format, if the page images are of a suitably good quality.
After the image has been loaded in and processed, the window is split between the input document and the output text. The output text pane has a real-time spell check and a few rudimentary text editing facilities. As you load in the pages of a multi-page document, you can keep appending the output to the text pane. Obviously, as the source document was so poor to begin with, the output contained a few errors. I made some corrections by hand, such as manually removing hyphens. The real-time spell checker, which lets you choose corrections from a context menu, and the visual references back to the original document were both helpful here.
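gImageReader drives Tesseract underneath, so the same recognition can also be run from the command line once the German language pack (deu) is installed. The file names here follow the hypothetical ones above:

```shell
# Run Tesseract directly on each cleaned-up page; each run writes a
# matching .txt file (clean-000.txt and so on)
for f in clean-*.pbm; do
    tesseract "$f" "${f%.pbm}" -l deu
done

# Stitch the per-page output into one file for translation
cat clean-*.txt > document.txt
```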
Translate into English (Google Translate)
The final stage was to cut and paste the text into Google Translate.
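As an aside, if you would rather stay on the command line, the third-party translate-shell tool wraps Google Translate. Assuming it is installed and you are online, something like this should work (document.txt is the hypothetical OCR output from the previous step):

```shell
# de:en = German to English; -b asks for brief, text-only output
trans -b de:en -i document.txt -o translated.txt
```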
The end result was good enough for me to extract the information that I needed. Here's an example of its output:
The use of free software also has a political dimension. The freedom of the software was on the 3rd UN World Summit on lnfonnationsgesellschaft (WSIS) recognized as worthy of protection. It belongs to the elementary demands of civil society with the "digital divide" is to be overcome. The application and further development of free software is free of barriers such as Soitware patents, restrictive licensing conditions and high cost. This reflects free software free decision-making powers again and wins an additional strategic importance for research, innovation and growth.
UK based freelance writer Michael Reed writes about technology, retro computing, geek culture and gender politics.