At the Forge - Aggregating Syndication Feeds

Over the last few months, we have looked at RSS and Atom, two XML-based file formats that make it easy to create and distribute summaries of a Web site. Although such syndication, as it is known, traditionally is associated with Weblogs and news sites, there is growing interest in its potential for other uses. Any Web-based information source is a potentially interesting and useful candidate for either RSS or Atom.

So far, we have looked at ways in which people might create RSS and Atom feeds for a Web site. Of course, creating syndication feeds is only one half of the equation. Equally as important and perhaps even more useful is understanding how we can retrieve and use syndication feeds, both from our own sites and from other sites of interest.

As we have seen, three different types of syndication feeds exist: RSS 0.9x and its more modern version, RSS 2.0; the incompatible RSS 1.0; and Atom. Each does roughly the same thing, and there is a fair amount of overlap among these standards. But networking protocols do not work well when we assume that everything is good enough or close enough, and syndication is no exception. If we want to read all of the syndicated sites, then we need to understand all of the different protocols, as well as the versions of those protocols. For example, there actually are nine different versions of RSS, which, when combined with Atom, brings us to a total of ten different syndication formats that a site might be using. Most of the differences probably are negligible, but it would be foolish to ignore them completely or to assume that everyone is using the latest version. Ideally, we would have a module or tool that allows us to retrieve feeds in a variety of different formats, papering over the differences as much as possible while still taking advantage of each protocol's individual power.

This month, we look at the Universal Feed Parser, an open-source solution to this problem written by Mark Pilgrim. Pilgrim is a well-known Weblog author and Python programmer, and he also was one of the key people involved in the creation of the Atom syndication format. This should come as no surprise, given the pain that he experienced in writing the Universal Feed Parser. Beyond the various flavors of RSS and Atom, it also handles CDF, a proprietary Microsoft format used for the publication of such items as active desktop and software updates. This part might not be of interest to Linux desktop users, but it raises interesting possibilities for organizations with Microsoft systems installed. The Universal Feed Parser (feedparser), in version 3.3 as of this writing, appears to be the best tool of its kind, in any language, and regardless of licensing.

Installing feedparser

Installing feedparser is extremely simple. Download the latest version, move into its distribution directory and type python setup.py install. This activates Python's standard installation utility, placing feedparser in your Python site-packages directory. Once you have installed feedparser, you can test it using Python interactively, from a shell window:


>>> import feedparser

The >>> symbols are Python's standard prompt when working in interactive mode. The above imports the feedparser module into Python. If you have not installed feedparser, or if something went wrong with the installation, executing this command results in a Python ImportError.
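
If the import succeeds, you also can check which release you have installed. Recent versions of the module expose a version string; this is a quick sketch, and the attribute is worth confirming against your installed copy:

>>> print feedparser.__version__   # should print something like 3.3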

Now that we have imported our module into memory, let's use it to look at the latest news from Linux Journal's Web site. We type:

>>> ljfeed = feedparser.parse("http://www.linuxjournal.com/news.rss")

We do not have to indicate the protocol or version of the feed we are asking feedparser to work with—the package is smart enough to determine such versioning on its own, even when the RSS feed fails to identify its version. At the time of writing, the LJ site is powered by PHPNuke and the feed is identified explicitly as RSS 0.91.
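
If you are curious about what feedparser decided, the parsed result records its guess about the feed's format. Here is a small sketch; the exact version strings (such as 'rss091u' for a Userland-style RSS 0.91 feed) come from feedparser's documentation, so treat them as an assumption to verify:

>>> print ljfeed.version   # e.g. rss091u for a Userland-style RSS 0.91 feed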

Now that we have retrieved a new feed, we can find out exactly how many entries we received, which largely is determined by the configuration of the server:

>>> len(ljfeed.entries)

Of course, the number of items is less interesting than the items themselves, which we can see with a simple for loop:

>>> for entry in ljfeed.entries:
...     print entry['title']
...

Remember to indent the print statement to tell Python that it's part of the loop. If you are new to Python, you might be surprised by the lines that begin with ...; they indicate that Python is waiting for more input inside the block begun by for. Simply press <Enter> on a blank line to conclude the block, and you can see the latest titles.

We also can get fancy, looking at a combination of URL and title, using Python's string interpolation:


>>> for entry in ljfeed.entries:
...     print '<a href="%s">%s</a>' % \
...     (entry['link'], entry['title'])

As I indicated above, feedparser tries to paper over the differences among protocols, allowing us to work with all syndicated content as if it were roughly equivalent. I thus can repeat the above commands with the syndication feed from my Weblog. I recently moved to WordPress, which provides an Atom feed:


>>> altneufeed = feedparser.parse(
... "http://altneuland.lerner.co.il/wp-atom.php")
>>> for entry in altneufeed.entries:
...     print '<a href="%s">%s</a>' % \
...     (entry.link, entry.title)

Notice how this last example uses attributes entry.link and entry.title, while the previous example uses dictionary keys entry['link'] and entry['title']. feedparser tries to be flexible, providing several interfaces to the same information to suit different needs and styles.
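
To see the two interfaces side by side, here is a small sketch comparing them on a single entry; it also peeks at the entry's summary, which feedparser normally maps from the item's description (whether a summary is present depends on the feed):

>>> entry = ljfeed.entries[0]
>>> entry['title'] == entry.title      # both styles return the same data
True
>>> print entry.summary[:60]           # first 60 characters of the summary, if the feed provides one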

______________________

Comments


a small semantic error in aggregator.py

zied

Hi,
In aggregator.py, instead of the feed's title, the code writes the title of the feed's first entry:
aggregation_file.write('%s\n' % \
feed.entries[0].title)

I would suggest this instead:
aggregation_file.write('%s\n' % \
feed.channel.title)

bye

Share what you learn, what you don't

Install error

midijery

I came up with an error also. I'm running SUSE 9.1, and installing as per the instructions produced an error:
No module named distutils.core
I've been trying to work with Linux for many years, and it's getting much more user friendly, but coming up with errors like this only leads to frustration.

Not so simple install

maskedfrog

I can't speak for other distros, but on Mandrake 10.1, and likely previous versions, libpython2.x-devel must be installed, not just python.

Installing feedparser is extremely simple. Download the latest version, move into its distribution directory and type
python setup.py install.
This activates Python's standard installation utility, placing feedparser in your Python site-packages directory. Once you have installed feedparser, you can test it using Python interactively, from a shell window:

This will quickly result in feedback of:

error: invalid Python installation: unable to open
/usr/lib/python2.3/config/Makefile (No such file or directory)

or similar, unless libpythonX.x-devel is installed. Apparently this applies to Red Hat Fedora also.

Other than that (I haven't checked the code sample from the first reply), this is a fine article that I hope will get me started on my own personal aggregator, so I can replace KNewsTicker with a robust and site-friendly aggregator. And not get banned at /. again (-:

Download link, and example code typo

nathanst

The article doesn't seem to actually say where feedparser can be downloaded from (and there is no "resources" link for this article). Presumably this is the site in question:
http://www.feedparser.org/

Also, in the How New Is that News? section, it looks like the code snippet is actually missing the "modified" parameter in the function call. I think those lines should be:


last_retrieval = (2004, 9, 1, 0, 0, 0, 0, 0, 0)
ljfeed = feedparser.parse("http://www.linuxjournal.com/news.rss",
                          modified=last_retrieval)
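
One way to confirm the conditional fetch behaved as expected is to check the HTTP status that feedparser records for remote feeds. This is a sketch, assuming ljfeed was fetched with modified=last_retrieval as above; the exact behavior is worth verifying against the feedparser documentation:

# .status is set only for feeds retrieved over HTTP
if ljfeed.status == 304:
    print "Nothing new since the last retrieval"
else:
    print "%d entries retrieved" % len(ljfeed.entries)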
