At the Forge - Aggregating Syndication Feeds

by Reuven M. Lerner

Over the last few months, we have looked at RSS and Atom, two XML-based file formats that make it easy to create and distribute summaries of a Web site. Although such syndication, as it is known, traditionally is associated with Weblogs and news sites, there is growing interest in its potential for other uses. Any Web-based information source is a potentially interesting and useful candidate for either RSS or Atom.

So far, we have looked at ways in which people might create RSS and Atom feeds for a Web site. Of course, creating syndication feeds is only one half of the equation. Equally important, and perhaps even more useful, is understanding how we can retrieve and use syndication feeds, both from our own sites and from other sites of interest.

As we have seen, three different types of syndication feeds exist: RSS 0.9x and its more modern version, RSS 2.0; the incompatible RSS 1.0; and Atom. Each does roughly the same thing, and there is a fair amount of overlap among these standards. But networking protocols do not work well when we assume that everything is good enough or close enough, and syndication is no exception. If we want to read all of the syndicated sites, then we need to understand all of the different protocols, as well as the versions of those protocols. For example, there actually are nine different versions of RSS, which, when combined with Atom, brings us to a total of ten different syndication formats that a site might be using. Most of the differences probably are negligible, but it would be foolish to ignore them completely or to assume that everyone is using the latest version. Ideally, we would have a module or tool that allows us to retrieve feeds from a variety of different protocols, papering over the differences as much as possible while still taking advantage of each protocol's individual power.

This month, we look at the Universal Feed Parser, an open-source solution to this problem written by Mark Pilgrim. Pilgrim is a well-known Weblog author and Python programmer, and he also was one of the key people involved in the creation of the Atom syndication format. This should come as no surprise, given the pain that he experienced in writing the Universal Feed Parser. It also handles CDF, a proprietary Microsoft format used for the publication of such items as Active Desktop and software updates. This part might not be of interest to Linux desktop users, but it raises interesting possibilities for organizations with Microsoft systems installed. The Universal Feed Parser (feedparser), in version 3.3 as of this writing, appears to be the best tool of its kind, in any language, and regardless of licensing.

Installing feedparser

Installing feedparser is extremely simple. Download the latest version, move into its distribution directory and type python setup.py install. This activates Python's standard installation utility, placing feedparser in your Python site-packages directory. Once you have installed feedparser, you can test it using Python interactively, from a shell window:


>>> import feedparser

The >>> symbols are Python's standard prompt when working in interactive mode. The above imports the feedparser module into Python. If you have not installed feedparser, or if something went wrong with the installation, executing this command results in a Python ImportError.
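If you plan to use feedparser from a standalone script rather than the interactive prompt, a small guard around the import can produce a friendlier message than a bare traceback. This is a minimal sketch using only standard Python, nothing feedparser-specific beyond the module name:

try:
    import feedparser
except ImportError:
    import sys
    sys.stderr.write("feedparser is not installed; " \
                     "run 'python setup.py install' first\n")
    sys.exit(1)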

Now that we have imported our module into memory, let's use it to look at the latest news from Linux Journal's Web site. We type:

>>> ljfeed = feedparser.parse("http://www.linuxjournal.com/news.rss")

We do not have to indicate the protocol or version of the feed we are asking feedparser to work with—the package is smart enough to determine such versioning on its own, even when the RSS feed fails to identify its version. At the time of writing, the LJ site is powered by PHPNuke and the feed is identified explicitly as RSS 0.91.
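If you are curious about what feedparser concluded, the parsed result carries a version attribute naming the format it detected, and the feed-level metadata (as opposed to the individual entries) lives under the feed attribute. A quick look, purely as an aside:

>>> ljfeed.version     # a short string naming the detected format
>>> ljfeed.feed.title  # feed-level metadata, such as the feed's title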

Now that we have retrieved the feed, we can find out exactly how many entries we received, a number determined largely by the configuration of the server:

>>> len(ljfeed.entries)

Of course, the number of items is less interesting than the items themselves, which we can see with a simple for loop:

>>> for entry in ljfeed.entries:
...     print entry['title']
...

Remember to indent the print statement to tell Python that it's part of the loop. If you are new to Python, you might be surprised by the lines that begin with ... and indicate that Python is ready and waiting for input after the for. Simply press <Enter> to conclude the block begun by for, and you can see the latest titles.

We also can get fancy, looking at a combination of URL and title, using Python's string interpolation:


>>> for entry in ljfeed.entries:
...     print '<a href="%s">%s</a>' % \
...     (entry['link'], entry['title'])

As I indicated above, feedparser tries to paper over the differences between different protocols, allowing us to work with all syndicated content as if it were roughly equivalent. I thus can repeat the above commands with the syndication feed from my Weblog. I recently moved to WordPress, which provides an Atom feed:


>>> altneufeed = feedparser.parse(
... "http://altneuland.lerner.co.il/wp-atom.php")
>>> for entry in altneufeed.entries:
...     print '<a href="%s">%s</a>' % \
...     (entry.link, entry.title)

Notice how this last example uses attributes entry.link and entry.title, while the previous example uses dictionary keys entry['link'] and entry['title']. feedparser tries to be flexible, providing several interfaces to the same information to suit different needs and styles.
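Both spellings point at the same underlying data, and because entries behave like dictionaries, ordinary dictionary techniques apply as well. A small sketch of the idea; the use of get() with a default here is plain Python dictionary behavior, handy because not every feed supplies every field:

>>> entry = ljfeed.entries[0]
>>> entry.title == entry['title']   # the two spellings return the same data
>>> entry.get('author', 'unknown')  # dict-style access with a fallback value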

How New Is that News?

The point of a news aggregator or other application that uses RSS and Atom is to collect and present newly updated information. An aggregator can show only the items that a server provides; if an RSS feed includes only the two most recently published items, then it becomes the aggregator's responsibility to poll, cache and display the items that have since dropped out of the feed.

This raises two different but related questions: How can we ensure that our aggregator displays only items we have not seen yet? And is there a way for our aggregator to reduce the load on Weblog servers, retrieving only those items that were published since our last visit? Answering the first question requires looking at the modification date, if it exists, for each item.
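One simple way to approach the first question, even when per-item dates are missing, is to remember which links we already have displayed. The sketch below takes that route; the seen_links.txt bookkeeping file and the overall approach are my own illustration, not part of feedparser:

import feedparser

seen_filename = "seen_links.txt"    # hypothetical bookkeeping file

# Load the links we have already shown, one per line.
try:
    seen_file = open(seen_filename)
    seen_links = [line.strip() for line in seen_file]
    seen_file.close()
except IOError:
    seen_links = []

feed = feedparser.parse("http://www.linuxjournal.com/news.rss")

new_links = []
for entry in feed.entries:
    if entry.link not in seen_links:
        print entry.title           # a genuinely new item
        new_links.append(entry.link)

# Remember the new links for next time.
seen_file = open(seen_filename, "a")
for link in new_links:
    seen_file.write(link + "\n")
seen_file.close()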

The latter question has become, as of this writing, an increasingly popular subject of debate in the Web community. As a Weblog grows in popularity, the number of people who subscribe to its syndication feed also grows. If a Weblog has 500 subscribers to its syndication feed, and each subscriber's aggregator looks for updates every hour, the Web server must handle an additional 500 requests per hour. If the syndication feed provides the site's entire content, this can result in a great deal of wasted bandwidth, reducing the site's response time for other visitors and potentially forcing the site owner to pay for exceeding allocated monthly bandwidth.

feedparser allows us to be kind to syndicating servers and ourselves by providing a mechanism for retrieving a syndication feed only when there is something new to show. This is possible because modern versions of HTTP allow the requesting client to include an If-Modified-Since header, followed by a date. If the requested URL has changed since the date mentioned in the request, the server responds with the URL's content. But if the requested URL is unchanged, the server returns a 304 response code, indicating that the previously downloaded version remains the most current content.

We accomplish this by passing an optional modified parameter to our call to feedparser.parse(). This parameter is a standard Python time tuple, as defined by the time module, in which the first six elements are the year, month, day, hour, minutes and seconds. The final three elements don't concern us and can be left as zeroes. So if I were interested in seeing feeds posted since September 1, 2004, I could say:

last_retrieval = (2004, 9, 1, 0, 0, 0, 0, 0, 0)
ljfeed = feedparser.parse(
         "http://www.linuxjournal.com/news.rss",
         modified=last_retrieval)

If Linux Journal's server is configured well, the above code results either in ljfeed containing the complete syndication feed, returned with an HTTP OK status (numeric code 200), or in an indication that the feed has not changed since its last retrieval (numeric code 304). Although keeping track of the last time you requested a particular syndication feed might require more record-keeping on your part, it is important to do, especially if you request feed updates on a regular basis. Otherwise, you might find your application unwelcome at certain sites.
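Here is a sketch of how the two halves might fit together in a script: we remember when the successful fetch happened (time.gmtime() already returns the nine-element tuple that the modified parameter expects) and, on the next run, skip our own processing when the server answers 304. The status attribute is set only for feeds fetched over HTTP, so we read it defensively with getattr():

import time
import feedparser

last_retrieval = (2004, 9, 1, 0, 0, 0, 0, 0, 0)

ljfeed = feedparser.parse(
         "http://www.linuxjournal.com/news.rss",
         modified=last_retrieval)

# status is present only for HTTP-fetched feeds; assume 200 otherwise.
if getattr(ljfeed, 'status', 200) == 304:
    print "No changes since the last retrieval"
else:
    for entry in ljfeed.entries:
        print entry.title
    # Record the time of this fetch for the next modified= argument.
    last_retrieval = time.gmtime()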

Working with Feeds

Now that we have a basic idea of how to work with feedparser, let's create a simple aggregation tool. This tool gets its input from a file called feeds.txt and produces its output in the form of an HTML file called myfeeds.html. Running this program from cron and looking at the resulting HTML file once per day provides a crude-but-working news feed from the sites that most interest you.

Feeds.txt contains URLs of actual feeds rather than of the sites from which we would like to get the feed. In other words, it's up to the user to find and enter the URL for each feed. More sophisticated aggregation tools usually are able to determine the feed's URL from a link tag in the header of the site's home page.
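For the curious, here is a rough sketch of that sort of autodiscovery, done with nothing more than urllib and a regular expression. Real aggregators use a proper HTML parser and cope with relative URLs, multiple alternative feeds and so on, so treat this only as an illustration of the idea:

import re
import urllib

def discover_feed_url(site_url):
    """Return the first RSS/Atom <link> href found in a page,
    or None.  A crude sketch, not production-quality parsing."""
    html = urllib.urlopen(site_url).read()
    pattern = re.compile(
        r'<link[^>]+type="application/(?:rss|atom)\+xml"[^>]*>',
        re.IGNORECASE)
    for tag in pattern.findall(html):
        match = re.search(r'href="([^"]+)"', tag)
        if match:
            return match.group(1)
    return None

print discover_feed_url("http://www.linuxjournal.com/")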

Also, despite my above warning that every news aggregator should keep track of its most recent request so as not to overwhelm servers, this program leaves out such features as part of my attempt to keep it small and readable.

The program, aggregator.py, is shown in Listing 1 and is divided into four parts:

  1. We first open the output file, which is an HTML-formatted text file called myfeeds.html. The file is designed to be used from within a Web browser. If you are so inclined, you could add this local file, which has a file:/// URL, to your list of personal bookmarks or even make it your startup page. After making sure that we indeed can write to this file, we start the HTML file.

  2. We then read the contents of feeds.txt, which contains one feed URL per line. In order to avoid problems with whitespace or blank lines, we strip off the whitespace and ignore any line without at least one printable character.

  3. Next, we iterate over the list of feeds, feeds_list, invoking feedparser.parse() on each URL. For each feed we receive, we write its entries to the output file, myfeeds.html, including both the URL and the title of each article.

  4. Finally, we close the HTML and the file.

Listing 1. aggregator.py


#!/usr/bin/python

import feedparser
import sys

# ---------------------------------------------------
# Open the personal feeds output file

aggregation_filename = "myfeeds.html"
max_title_chars = 60

try:
    aggregation_file = open(aggregation_filename,"w")
    aggregation_file.write("""<html>
<head><title>My news</title></head>
<body>""")
except IOError:
    print "Error: cannot write '%s'" % \
          aggregation_filename
    sys.exit(1)

# ---------------------------------------------------
# Each non-blank line in feeds.txt is a feed source.

feeds_filename = "feeds.txt"
feeds_list = []

try:
    feeds_file = open(feeds_filename, 'r')
    for line in feeds_file:
        stripped_line = line.strip()

        if len(stripped_line) > 0:
            feeds_list.append(stripped_line)
            sys.stderr.write("Adding feed '" + \
            stripped_line + "'\n")

    feeds_file.close()

except IOError:
    print "Error: cannot read '%s' " % feeds_filename
    sys.exit(1)

# ---------------------------------------------------
# Iterate over feeds_list, grabbing the feed for each

for feed_url in feeds_list:
    sys.stderr.write("Checking '%s'..." % feed_url)
    feed = feedparser.parse(feed_url)
    sys.stderr.write("done.\n")

    # Use the feed's own title as the heading and open the entry list
    aggregation_file.write('<h2>%s</h2>\n<ul>\n' % \
                           feed.feed.title)

    # Iterate over each entry from this feed,
    # displaying it and putting it in the summary
    for entry in feed.entries:
        sys.stderr.write("\tWrote: '%s'" % \
                      entry.title[0:max_title_chars])

        if len(entry.title) > max_title_chars:
            sys.stderr.write("...")

        sys.stderr.write("\n")

        aggregation_file.write(
           '<li><a href="%s">%s</a>\n' %
           (entry.link, entry.title))

    aggregation_file.write('</ul>\n')

# ---------------------------------------------------
# Finish up with the HTML

aggregation_file.write("""</body>
</html>
""")
aggregation_file.close()


As you can see from looking at the code listing, creating such a news aggregator for personal use is fairly simple and straightforward. This is merely a skeletal application, however. To be more useful in the real world, we probably would want to move feeds.txt and myfeeds.html into a relational database, determine the feed URL automatically or semi-automatically based on a site URL and handle categories of feeds, so that multiple feeds can be read as if they were one.
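As one illustration of the categories idea, the following sketch reads feeds grouped by category and merges each group's entries under a single heading. The categories.txt format used here, one category name, a pipe character and a feed URL per line, is my own invention for this example:

import feedparser

# Hypothetical input format, one feed per line:  Category|URL
categories = {}
for line in open("categories.txt"):
    line = line.strip()
    if not line:
        continue
    category, url = line.split("|", 1)
    categories.setdefault(category, []).append(url)

# Print every category's feeds as if they were a single feed
for category, urls in categories.items():
    print "== %s ==" % category
    for url in urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            print "  %s (%s)" % (entry.title, entry.link)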

If the above description sounds familiar, then you might be a user of Bloglines.com, a Web-based blog aggregator that probably works in the above way. Obviously, Bloglines handles many more feeds and many more users than we had in this simple toy example. But, if you are interested in creating an internal version of Bloglines for your organization, the combination of the Universal Feed Parser with a relational database, such as PostgreSQL, and some personalization code is both easy to implement and quite useful.

Conclusion

The tendency to reinvent the wheel often is cited as a widespread problem in the computer industry. Mark Pilgrim's Universal Feed Parser might fill only a small need in the world of software, but that need is almost certain to grow as the use of syndication increases for individuals and organizations alike. The bottom line is that if you are interested in reading and parsing syndication feeds, you should be using feedparser. It is heavily tested and documented, frequently updated and improved, and it does its job quickly and well.

Reuven M. Lerner, a longtime Web/database consultant and developer, now is a graduate student in the Learning Sciences program at Northwestern University. His Weblog is at altneuland.lerner.co.il, and you can reach him at reuven@lerner.co.il.
