Creating a Planet Me Blog Aggregator
The Planet Project lets on-line communities easily build a central Web page that aggregates blogs from people in their community. The Planet code powers such community blogs as Planet GNOME and Planet Apache, giving readers a low-cost way to keep an eye on a community. This article focuses on using the Planet code on your local machine to create your own custom blog aggregator.
The Planet code requires Python 2.2 or later. The simplest method to install Planet is to download a nightly snapshot tarball from the planetplanet.org Web site and extract it to your home directory. I tend to rename the extracted planet-nightly directory to include its day of download and use a handy link to the current version of Planet Me.
In this article, I've used references to the path of my home directory a few times; remember to substitute your own home directory in the examples.
Listing 1. Installing Planet
$ cd ~
$ tar xjvf planet-nightly.tar.bz2
$ planetdated=planet-$(date +'%d%b%y')
$ mv planet-nightly $planetdated
$ ln -s $planetdated planet
$ cd planet
$ cp -av fancy-examples me-meta
$ cd me-meta
$ cp ../examples/*.xml* .
$ edit config.ini
name = Planet Me
link = file:///home/ben/planet/me/index.html
owner_name = John Doe
owner_email = root@localhost
# later in the file
# template_files should all be on one line
template_files = me-meta/index.html.tmpl me-meta/rss20.xml.tmpl me-meta/rss10.xml.tmpl me-meta/opml.xml.tmpl me-meta/foafroll.xml.tmpl
# later in the file change
# fancy-examples/index.html.tmpl
[me-meta/index.html.tmpl]
items_per_page = 30
$ cd ..
$ mkdir cache
$ ln -s output me
# Without proxy
$ python planet.py me-meta/config.ini
# Using a standard squid proxy on "dairiserver"
$ http_proxy=http://dairiserver:3128/ \
  python planet.py me-meta/config.ini
The final two commands in Listing 1 show how to fetch the current news feeds and build your initial Planet; which one you use depends on details such as whether you must go through a proxy server to reach the Internet. After running these commands, you should have a Planet Me viewable in your Web browser at ~/planet/me/index.html, and your planet should look similar to Figure 1.
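Because Planet rebuilds the page only when it is run, you probably want to refresh it on a schedule. One way, assuming the ~/planet layout used in Listing 1, is a crontab entry such as the following sketch (the hourly schedule is only an example):

```
# Refresh Planet Me at the top of every hour
0 * * * * cd $HOME/planet && python planet.py me-meta/config.ini
```

If your machine sits behind a proxy, prefix the command with the http_proxy setting shown in Listing 1.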
You'll want to customize which news feeds you are viewing, and this is done at the end of me-meta/config.ini. The configuration file defines a section by text surrounded by square brackets, and the options for a section follow as key=value pairs. Each blog to aggregate is defined in its own section, whose name is the URL of that blog's RSS feed. See Listing 2 for an example from the default config.ini file.
The name is shown in the header of each aggregated post from that blog, and with the default HTML templates, the face image appears on the right side. The facewidth and faceheight options are optional.
Listing 2. Sample Aggregation Definition
[http://www.gnome.org/~jdub/blog/?flav=rss]
name = Jeff Waugh
face = jdub.png
facewidth = 70
faceheight = 74
Many sites provide handy topic icons that can be used to spruce up your Planet Me. For example, in Listing 3, I use one of the Slashdot section icons (see the on-line Resources) for news items taken from Slashdot's RSS feed.
Assuming you use the Planet setup as described in this article, the topic icons are stored in ~/planet/me/images. You can see the setup for my Slashdot topic icon in Listing 3.
Listing 3. How to Get the Image from Slashdot
$ cd ~/planet/me/images/
$ wget \
  http://images.slashdot.org/topics/topicslashback.gif
# convert is from ImageMagick
$ convert topicslashback.gif slashdot.png
Listing 4 shows the new section to append to the config.ini to integrate the Slashdot icon into your Planet Me.
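Such a section follows the same pattern as Listing 2: the section name is the feed URL, and the face option names the icon saved above. A sketch of what it might look like (the feed URL and dimensions here are assumptions; check Slashdot's site for its current RSS address and match the actual size of slashdot.png):

```ini
# Appended to me-meta/config.ini
# Feed URL is illustrative -- substitute Slashdot's current RSS address
[http://rss.slashdot.org/Slashdot/slashdot]
name = Slashdot
face = slashdot.png
# set these to slashdot.png's real dimensions
facewidth = 80
faceheight = 80
```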