Extract and Parse ODF Files with Python
Listing 2. List the Files in the ODT Archive
import sys, zipfile

myfile = zipfile.ZipFile(sys.argv[1])
listoffiles = myfile.infolist()
for s in listoffiles:
    print s.orig_filename
The import statement pulls in the sys module for reading the filename from the command line and the zipfile module for reading and unzipping ZIP archives. As you saw from the Python shell, the infolist() method on the zipfile archive lists the files in it. So iterating over the items from infolist() and printing each item's orig_filename attribute gives you a list of all files in the archive.
For more detailed information, try something like this:
print s.orig_filename, s.date_time, s.filename, s.file_size, s.compress_size
You will receive more information about each file, similar to this:
mimetype (2006, 9, 9, 7, 50, 10) mimetype 39 39
Configurations2/statusbar/ (2006, 9, 9, 7, 50, 10) Configurations2/statusbar/ 0 0
Configurations2/accelerator/current.xml (2006, 9, 9, 7, 50, 10) Configurations2/accelerator/current.xml 0 2
Configurations2/floater/ (2006, 9, 9, 7, 50, 10) Configurations2/floater/ 0 0
...SNIPPED FOR BREVITY...
A typical ODF text file (with the .odt extension) will have some of the following files when unzipped. Here's the output:
mimetype
Configurations2/statusbar/
Configurations2/accelerator/current.xml
Configurations2/floater/
Configurations2/popupmenu/
Configurations2/progressbar/
Configurations2/menubar/
Configurations2/toolbar/
Configurations2/images/Bitmaps/
layout-cache
content.xml
styles.xml
meta.xml
Thumbnails/thumbnail.png
settings.xml
META-INF/manifest.xml
The most important file in the archive is the content.xml file, because it contains the data for the document itself. I discuss this file here; however, for detailed information on each tag and so on, take a look at the specification in the 2,000+-page PDF file from the OASIS Web site (see Resources).
Basically, the content.xml file looks like a DHTML file with tags for all the contents. The tag I was concerned with most for my search operation was the <text:p> tag and its closing tag </text:p>, which wraps paragraphs in a document. I'll show you how to get this tag from a content file later in this article.
Other files of interest in the archive are META-INF/manifest.xml, mimetype, meta.xml and styles.xml. The remaining files simply contain configuration data for the word processor that reads and displays the content file.
The manifest is simply an XML file listing all the files in the zipped archive. The mimetype file is a single line containing the MIME type of the content file. The meta.xml file contains information about the author, creation date and so on. The styles.xml file contains all the formatting styles for displaying the document.
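To see a couple of these in practice, here is a minimal sketch, assuming the archive name is passed as the first command-line argument, that prints the one-line mimetype entry and the full manifest:

import sys, zipfile

# Open the ODF archive named on the command line
myfile = zipfile.ZipFile(sys.argv[1])

# The mimetype entry is a single line, such as
# application/vnd.oasis.opendocument.text for an .odt file
print myfile.read('mimetype')

# The manifest is a small XML listing of every file in the archive
print myfile.read('META-INF/manifest.xml')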
You can extract any of these files from the ODF file with the read() method on the zip object to get it as a very long string. Once read, you can modify, view and write the whole string to disk as an independent file. Listing 3 shows how to extract the manifest.xml file.
Listing 3. Extracting Files from the ODT Archive
import sys, zipfile

if len(sys.argv) < 3:
    print "Usage: extract odf-filename outputfilename"
    sys.exit(0)

myfile = zipfile.ZipFile(sys.argv[1])
listoffiles = myfile.infolist()
for s in listoffiles:
    if s.orig_filename == 'META-INF/manifest.xml':
        fd = open(sys.argv[2], 'w')
        bh = myfile.read(s.orig_filename)
        fd.write(bh)
        fd.close()
For more than one file, you can use a list instead of the if clause:
if s.orig_filename in ['content.xml', 'styles.xml']:
This way, you can extract whatever files you need to look at simply by reading in their contents and either manipulating them or writing them off to disk.
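Putting that together, a variant of Listing 3 that pulls out both files and writes each one to disk might look like the following sketch. The choice to reuse each member's own name as its output filename is an assumption made for illustration:

import sys, zipfile

myfile = zipfile.ZipFile(sys.argv[1])
for s in myfile.infolist():
    # Extract only the members named in the list
    if s.orig_filename in ['content.xml', 'styles.xml']:
        # Both members sit at the top of the archive, so their
        # names can double as output filenames on disk
        fd = open(s.orig_filename, 'w')
        fd.write(myfile.read(s.orig_filename))
        fd.close()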
The contents of an XML file are best suited for manipulation as a tree structure. Use the XML parsing capabilities in Python to get a tree of all the nodes within an XML file. Once you have the tree in a content file, you easily can get to the <text:p> nodes. You don't really have to extract the file to disk, because you also can run an XML parser on the string just as well as reading from a file.
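For example, here is a minimal sketch, again assuming the archive name arrives as the first command-line argument, that parses content.xml straight from the in-memory string with Python's minidom and counts the <text:p> nodes:

import sys, zipfile, xml.dom.minidom

# Read content.xml out of the archive and parse the string directly;
# no temporary file on disk is needed
myfile = zipfile.ZipFile(sys.argv[1])
doc = xml.dom.minidom.parseString(myfile.read('content.xml'))

# Every paragraph in the document is wrapped in a <text:p> node
paragraphs = doc.getElementsByTagName('text:p')
print len(paragraphs), 'paragraphs found'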
There are two types of XML parsers, SAX and DOM. The SAX parser is faster and less memory-intensive, because it reads and parses an input file one tag at a time. You have only one tag at a time to work with and must track data yourself. In contrast, the DOM parser reads the entire file into memory and therefore provides better options for navigating and manipulating the XML nodes.
Let's look at examples of using both SAX and DOM, so you can see which one suits your purpose. The SAX example shows how to extract unique node names from an XML file. The DOM example illustrates how to read values from within specific nodes once the file has been read completely into memory.
First, let's use the SAX parser to see what nodes are available in the content.xml file. The code simply prints the name of each type of node found in the file. Note that for different types of files you may get different node names (Listing 4).
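As a rough idea of what such a listing involves, here is a minimal sketch, not the article's actual Listing 4, of a SAX content handler that prints each element name the first time it appears:

import sys, zipfile, xml.sax, xml.sax.handler

class NodeNameHandler(xml.sax.handler.ContentHandler):
    def __init__(self):
        xml.sax.handler.ContentHandler.__init__(self)
        self.seen = {}  # element names already printed

    # Called once for every opening tag the parser encounters
    def startElement(self, name, attrs):
        if name not in self.seen:
            self.seen[name] = 1
            print name

myfile = zipfile.ZipFile(sys.argv[1])
xml.sax.parseString(myfile.read('content.xml'), NodeNameHandler())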