At the Forge - Aggregating Syndication Feeds
Over the last few months, we have looked at RSS and Atom, two XML-based file formats that make it easy to create and distribute summaries of a Web site. Although such syndication, as it is known, traditionally is associated with Weblogs and news sites, there is growing interest in its potential for other uses. Any Web-based information source is a potentially interesting and useful candidate for either RSS or Atom.
So far, we have looked at ways in which people might create RSS and Atom feeds for a Web site. Of course, creating syndication feeds is only one half of the equation. Equally as important and perhaps even more useful is understanding how we can retrieve and use syndication feeds, both from our own sites and from other sites of interest.
As we have seen, three different types of syndication feeds exist: RSS 0.9x and its more modern version, RSS 2.0; the incompatible RSS 1.0; and Atom. Each does roughly the same thing, and there is a fair amount of overlap among these standards. But networking protocols do not work well when we assume that everything is good enough or close enough, and syndication is no exception. If we want to read all of the syndicated sites, then we need to understand all of the different protocols, as well as versions of those protocols. For example, there actually are nine different versions of RSS, which, when combined with Atom, bring us to a total of ten different syndication formats that a site might be using. Most of the differences probably are negligible, but it would be foolish to ignore them completely or to assume that everyone is using the latest version. Ideally, we would have a module or tool that allows us to retrieve feeds from a variety of different protocols, papering over the differences as much as possible while still taking advantage of each protocol's individual power.
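To get a feel for the problem, here is a minimal sketch, in modern Python, of the first chore any such universal tool faces: guessing a feed's format from its root element. The sniff_format function is a hypothetical illustration, not the actual code of any real parser:

```python
# Sketch: guess a feed's format from its XML root element.
# sniff_format is a hypothetical helper for illustration only.
import xml.etree.ElementTree as ET

def sniff_format(xml_text):
    root = ET.fromstring(xml_text)
    tag = root.tag.split('}')[-1]        # strip any XML namespace prefix
    if tag == 'feed':
        return 'atom'
    if tag == 'rss':
        # RSS 0.9x and 2.0 declare their version as an attribute
        return 'rss ' + root.get('version', '?')
    if tag == 'RDF':
        return 'rss 1.0'                 # RSS 1.0 uses an RDF wrapper element
    return 'unknown'

print(sniff_format('<rss version="0.91"><channel></channel></rss>'))
```

Real feeds are messier than this, of course; many omit or misstate their version, which is exactly why a battle-tested library is preferable to a homemade sniffer.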
This month, we look at the Universal Feed Parser, an open-source solution to this problem written by Mark Pilgrim. Pilgrim is a well-known Weblog author and Python programmer, and he also was one of the key people involved in the creation of the Atom syndication format. This should come as no surprise, given the pain that he experienced in writing the Universal Feed Parser. The Universal Feed Parser also handles CDF, a proprietary Microsoft format used for the publication of such items as Active Desktop and software updates. This part might not be of interest to Linux desktop users, but it raises interesting possibilities for organizations with Microsoft systems installed. The Universal Feed Parser (feedparser), in version 3.3 as of this writing, appears to be the best tool of its kind in any language, regardless of licensing.
Installing feedparser is extremely simple. Download the latest version, move into its distribution directory and type python setup.py install. This activates Python's standard installation utility, placing feedparser in your Python site-packages directory. Once you have installed feedparser, you can test it using Python interactively, from a shell window:
>>> import feedparser
The >>> symbols are Python's standard prompt when working in interactive mode. The above imports the feedparser module into Python. If you have not installed feedparser, or if something went wrong with the installation, executing this command results in a Python ImportError.
Now that we have imported our module into memory, let's use it to look at the latest news from Linux Journal's Web site. We type:
>>> ljfeed = feedparser.parse("http://www.linuxjournal.com/news.rss")
We do not have to indicate the protocol or version of the feed we are asking feedparser to work with—the package is smart enough to determine such versioning on its own, even when the RSS feed fails to identify its version. At the time of writing, the LJ site is powered by PHPNuke and the feed is identified explicitly as RSS 0.91.
Now that we have retrieved a new feed, we can find out exactly how many entries we received, which is largely determined by the configuration of the server:

>>> print len(ljfeed.entries)
Of course, the number of items is less interesting than the items themselves, which we can see with a simple for loop:
>>> for entry in ljfeed.entries:
...     print entry['title']
...
Remember to indent the print statement to tell Python that it's part of the loop. If you are new to Python, you might be surprised by the lines that begin with ... and indicate that Python is ready and waiting for input after the for. Simply press <Enter> to conclude the block begun by for, and you can see the latest titles.
We also can get fancy, looking at a combination of URL and title, using Python's string interpolation:
>>> for entry in ljfeed.entries:
...     print '<a href="%s">%s</a>' % \
...         (entry['link'], entry['title'])
...
As I indicated above, feedparser tries to paper over the differences between different protocols, allowing us to work with all syndicated content as if it were roughly equivalent. I thus can repeat the above commands with the syndication feed from my Weblog. I recently moved to WordPress, which provides an Atom feed:
>>> altneufeed = feedparser.parse(
...     "http://altneuland.lerner.co.il/wp-atom.php")
>>> for entry in altneufeed.entries:
...     print '<a href="%s">%s</a>' % \
...         (entry.link, entry.title)
...
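For the curious, the normalization that makes this possible can be sketched, in much-simplified form, with nothing but the standard library. The entries function below is a hypothetical illustration written in modern Python, not feedparser's actual code, and real-world feeds demand far more care than this:

```python
# Sketch: pull (title, link) pairs out of either RSS 2.0 or Atom,
# illustrating the kind of normalization a universal parser performs.
# The entries function is a hypothetical helper for illustration only.
import xml.etree.ElementTree as ET

ATOM = '{http://www.w3.org/2005/Atom}'

def entries(xml_text):
    root = ET.fromstring(xml_text)
    results = []
    if root.tag == 'rss':                  # RSS: <item> with <title> and <link>
        for item in root.iter('item'):
            results.append((item.findtext('title'), item.findtext('link')))
    else:                                  # Atom: <entry> with <link href="...">
        for entry in root.iter(ATOM + 'entry'):
            link = entry.find(ATOM + 'link')
            results.append((entry.findtext(ATOM + 'title'),
                            link.get('href') if link is not None else None))
    return results

rss = ('<rss version="2.0"><channel><item><title>A</title>'
       '<link>http://a/</link></item></channel></rss>')
atom = ('<feed xmlns="http://www.w3.org/2005/Atom"><entry><title>A</title>'
        '<link href="http://a/"/></entry></feed>')
print(entries(rss) == entries(atom))  # both normalize to [('A', 'http://a/')]
```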
Notice how this last example uses attributes entry.link and entry.title, while the previous example uses dictionary keys entry['link'] and entry['title']. feedparser tries to be flexible, providing several interfaces to the same information to suit different needs and styles.
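This dual interface is easy to imitate in your own code. Here is a minimal sketch, in modern Python, of a dictionary whose keys can also be read as attributes, similar in spirit to feedparser's behavior (a simplified, hypothetical implementation, not feedparser's own class):

```python
# Sketch: a dict whose keys are also readable as attributes,
# mimicking the dual access style feedparser offers.
# FlexDict is a hypothetical name for illustration only.
class FlexDict(dict):
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

entry = FlexDict(title='Hello', link='http://example.com/')
print(entry['title'])   # dictionary-style access
print(entry.title)      # attribute-style access returns the same value
```

Defining __getattr__ on a dict subclass is enough here because Python only calls it when normal attribute lookup fails, so real dict methods such as keys() continue to work unchanged.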