Tesseract: an Open-Source Optical Character Recognition Engine

Tesseract is a quirky command-line tool that does an outstanding job.

I play with open-source OCR (Optical Character Recognition) packages periodically. My last foray was a few years ago when I bought a tablet PC and wanted to scan in some of my course books so I could carry just one thing to school. I tried every package I could find, and none of them worked well enough even to consider using. I ended up using the commercial version of Adobe Acrobat, which allows you to use the scanned page as the visual (preserving things like equations in math books), but it applies OCR to the text so you can search. It ended up being quite handy, and I was a little sad that I was incapable of getting any kind of result with open-source offerings.

Admittedly, the problem is very hard. Font variations, image noise and alignment problems make it extremely difficult to design an algorithm that can reliably translate an image of text into machine-readable text.

Recently, I was looking again and found a project called Tesseract. Tesseract is the product of HP research efforts that occurred in the late 1980s and early 1990s. HP and UNLV placed it on SourceForge in 2005, and it is in the process of migrating to Google Code (see Resources).

It currently is lacking features, such as layout recognition and multicolumn support; however, the most difficult part, the actual character recognition, is superb.

How to Install

Version 1.03 was the latest version at the time of this writing, and the build and install process still needed a little work. Integration with libtiff (which would allow you to use compressed TIFF as input) is enabled by default, but it was not working properly for me. You might try the default configuration first, as working libtiff support would allow compressed TIFF image input:

# ./configure

If you later find that it doesn't recognize text, reconfigure it without libtiff:

# ./configure --without-libtiff

The build is done as expected:

# make

Configure for version 1.03 also indicated that make install was broken. I managed to figure out the basics of installation by trial and error.

First, copy the executable from ccmain/tesseract to a directory on your path (for example, /usr/local/bin):

# cp ccmain/tesseract /usr/local/bin

Then, copy the tessdata directory and all of its contents to the same place as the executable (for example, /usr/local/bin/tessdata/...):

# cp -r tessdata /usr/local/bin/tessdata

Finally, make sure your shell PATH includes that directory (/usr/local/bin).

How to Use

First, you need access to a scanner or scanned pages. SANE is available with most Linux distributions and has a nice GUI front end called XSane. (I discuss more on scanning near the end of this article.)

Tesseract has no layout analysis, so it cannot detect multicolumn formats or figures. Also, the broken libtiff support means it can read only uncompressed TIFF. This means you must do a little work on your scanned document to get the best results. Fortunately, the steps are very simple; the most common ones can be automated, and the results are well worth it.
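If Tesseract rejects your input file, you can confirm whether the TIFF really is uncompressed with tiffinfo (part of the libtiff tools; an assumption on my part, as the article itself doesn't mention it):

```shell
# Print the TIFF compression scheme; Tesseract 1.x wants "None"
tiffinfo page.tif | grep -i compression
```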

This is what you need to do:

  1. Use a threshold function to drop lighting variations and convert the image to black and white.

  2. Erase any figures or graphics (optional, but if you skip this step, the recognizer will produce garbled text in those areas).

  3. Break any multicolumn text into smaller, single-column images.

I recommend using a graphics program, such as The GIMP, to get a feel for what needs to be done. The most important step is the first one, as it will drastically improve the accuracy of the OCR.

The GIMP has a great function that easily can remove lighting variations in all but the worst cases.

First, go to the Image→Mode menu and make sure the image is in RGB or Grayscale mode; thresholding will not work on indexed images. Next, select the menu Tools→Color Tools→Threshold. This tool converts pixels lighter than a specified cutoff value to white and all darker pixels to black. A pop-up (Figure 1) lets you select the threshold. Make sure image preview is turned on so you can see how the cutoff affects the image, and slide the threshold thumb left and right to choose the dividing line between white and black.

You may not be able to get rid of all of the uneven lighting without corrupting the text. Find a good-looking result for the text, then erase the rest of the noise with a paint tool. The transition from the first part to the second part in Figure 2 shows a typical result of this step.

Figure 1. Threshold dialog in The GIMP. Slide the triangle left and right to choose what pixels should be white and what pixels should be black.

You should zoom in on a portion of the image while you experiment with thresholding, so you can work closer to the pixel level. This shows you more of what Tesseract will see and gives you a better feel for how to get the best results. If you can't recognize the characters, Tesseract surely won't.

This page had handwritten notes, underlining and a section of lighting that threshold could not get rid of without compromising the rest of the image. Use a brush to paint over any easy-to-fix areas. I would not recommend spending much time on cases where the extraneous information (figure, noise and so on) has some distance from the text; Tesseract might insert a few garbled characters, but those are usually quicker to fix in a text editor. The resulting image should look something like the third part of Figure 2.

Figure 2. Zoomed view of image preparation, from left to right: the original scanned image, the image after applying threshold, and the image after applying threshold and some manual cleanup.

Now, switch the image to indexed mode (using the menu selection Image→Mode→Indexed), and choose black and white (one-bit palette). Also, make sure dithering is off. Save the image as an uncompressed TIFF image, and you are ready to do recognition.
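Once you have found a threshold that works for your scans, the grayscale/threshold/save steps can be scripted from the command line. Here is a sketch using ImageMagick's convert tool (an assumption on my part; the article works in The GIMP, and the 60% cutoff is a guess you should tune per batch):

```shell
# Grayscale, hard threshold, and save as uncompressed TIFF for Tesseract.
# The 60% cutoff is a guess -- tune it against a zoomed-in preview first.
in=scan.png
out="${in%.*}.tif"
convert "$in" -colorspace Gray -threshold 60% -compress None "$out"
```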

The recognition part is easy:

$ tesseract image.tif result

The second argument is the base name of the output file. Tesseract adds a .txt extension automatically, so in this example, the recognized text would be in result.txt.
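For a multipage document, the per-page runs are easy to script. A sketch, assuming pages are named page-01.tif, page-02.tif and so on:

```shell
# OCR each page, then stitch the text files together in page order
for f in page-*.tif; do
  tesseract "$f" "${f%.tif}"   # writes page-NN.txt alongside each TIFF
done
cat page-*.txt > book.txt
```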

The underlining in this example ended up significantly affecting the OCR. A few of the lines were recognized moderately well, but two of them were completely unintelligible after processing. This underscores the importance of using a clean source if possible. Manually removing the underlining drastically improved recognition, but it took more time than simply entering the text manually.




The online application Free

Anonymous's picture

The online application Free OCR lets you convert the contents of an image file into a text output format, though Microsoft Word output is not currently supported.

OCR softwares are everywhere

Anonymous's picture

OCR software is everywhere nowadays. I prefer online ones; they don't need installation and most of them are free, like this one: Free OCR.

Lengthy install?

Anonymousey's picture

How to install? Takes about 3 seconds...
apt-get install tesseract-ocr

Tesseract works really well!

I've been using Tesseract

caballosweb's picture

I've been using Tesseract OCR with a C# program I made to batch-OCR hundreds of documents on a server we run. Considering it costs nothing, I am very impressed with the accuracy. It is far superior to GOCR, which needs the image's gray scale adjusted before anything can be done.

Submitted by: Bajar Libros

Missing links to the images used in the test

Anonymous's picture

I can't find the links to the images used in the test. It seems very strange to me that Ocrad was unable to recognize even a single character on some of them.

What version of Ocrad did you use? Were the characters at least 20 pixels high, as the Ocrad manual requires? If they were smaller, did you use Ocrad's "--scale" option? Did you even RTFM?

If you want a good review of free OCR software better see this one for example.


Anonymous's picture

Why not try gscan2pdf, which has support for tesseract?

The OCR data is embedded into the PDF as an annotation. It can be indexed with Beagle, for example, and viewed with Adobe's PDF reader. Support for annotations is coming to the free PDF readers as well, I believe.

gscan2pdf also supports unpaper, and I find it an excellent all-around tool for my scanner.


MacPac's picture

It would be great if someone could code a program that could take my iSight or any USB cam and send the image directly to the above-mentioned program and convert it into text format, so all I would have to do is hold my textbook up against my webcam and get it on my computer. Cheers!

International characters?

Cesar's picture

I've been using Tesseract for a while and it works great, but it has a major flaw that I haven't been able to overcome: I can't make it recognize international characters (e.g., á, é, í, ó, ú, ñ, Ñ). For example, it changes ó to 6.

Is there a way to make it support other than standard ASCII characters?


open source rules, seriously.

Live tv's picture

Great work, it seems like a lengthy installation however.

Lengthy Installation

Mike's picture

You can OCR documents for free using Tesseract at A Billion Billion - Free OCR for Everyone by just uploading your TIFF files and clicking OCR. No installation this way.

Dead Link

Phil Cooper's picture

The site at abillionbillion.com no longer exists. There is another site, free-ocr.com, that allows one to upload scanned images in a variety of graphics file formats, but it is limited to a maximum of 10 images per hour and there's an upper size limit on the images. The site is supported by ads and donations.

I'm impressed!

pcountry's picture

Wow - it did a really good job, first try out of the box. I built it without turning off TIFF support, and used tesseract-1.04b. I put it in /usr/local/bin. At first I got this:

Error: Unable to open unicharset!

This was fixed by doing this:

$ sudo ln -s /usr/local/bin/tessdata /usr/local/share/tessdata

Then it complained about not recognizing the file format (which was TIFF with no compression). I renamed the file from "text2.tiff" to "text2.tif" and then it was happy. That's just silly, if you ask me.

This is all on Ubuntu Feisty. My original scan was 300 dpi, and I ran it through the GIMP the same as in the article.

Very nice!

