Internet Radio to Podcast with Shell Tools
It may seem strange that I'm calling a scripting language from another scripting language. The point is that I'm using each to do the things it's best at. Bash is designed to execute commands, and it's really easy to start a background process, find out its process ID and kill it again. On the other hand, trying to add an XML entry in Bash using the more basic string-handling tools, such as sed and grep, would have been, well, exactly the kind of thing that drove Larry Wall to write Perl in the first place.
Now that we have a script, we make the file executable and run it:
chmod +x catchthewolf
./catchthewolf
which results in a properly tagged MP3 file and a new entry in the wolfrss.xml RSS feed. When testing, you can uncomment the 30-second test line to make sure everything's working properly, but be sure to comment it back out before trying to catch a show.
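Inside catchthewolf, that toggle might look something like this. The $DURATION variable matches the capture commands shown later in the article, but the two-hour show length here is a guess:

```shell
# Normal run: capture the full show (length in seconds; adjust to taste)
DURATION=7200
# Testing: uncomment the next line for a quick 30-second capture
#DURATION=30
```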
Now all that's left is to get our computer to run this thing at 5AM on Saturday. That's done with the system's cron utility: invoke crontab -e and add an entry like this:
MAILTO=phil
# Testing: mail script output to me
# Catch hour of the wolf 5AM Saturdays
59 4 * * sat /home/phil/catchthewolf
crontab's editor is most likely to be set to vi-style commands, so you have to use i to start typing and <Esc>:wq to save-and-exit. When you're done, you should see this message:
crontab: installing new crontab
which says you're all set. Check man 5 crontab for more information on how to make jobs repeat every day, once a month or whatever. You also want to make sure your user name is in the file /etc/cron.allow—the list of who can run jobs on the system's scheduler. If you're running on a remote system, verify with the administrators that you're allowed to run cron jobs.
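To give a flavor of the scheduling syntax, the first five fields of each entry control the repetition. These example schedules are mine, not from the article:

```
# minute hour day-of-month month day-of-week  command
0  6  *  *  *    /home/phil/catchthewolf   # every day at 6AM
0  6  1  *  *    /home/phil/catchthewolf   # the 1st of each month at 6AM
30 22 *  *  fri  /home/phil/catchthewolf   # Fridays at 10:30PM
```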
To see the resulting podcast, point your RSS-aware software at the XML file the script creates. In Firefox, use Bookmarks→Manage Bookmarks→Add Live Bookmark, and remember to enter the URL starting with file:// and not the filename itself.
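If you're unsure of the exact URL, have the shell build it for you; this assumes the feed lives in your home directory:

```shell
# Print the URL to paste into the Live Bookmark dialog
echo "file://$HOME/wolfrss.xml"
```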
By taking two programs already on the hard drive, downloading two Perl modules and writing a few lines of shell script, we have assembled a homebrew Webcast recording system that saves our favorite programs for us to listen to whenever we choose. It also lets us know what it has done by popping up live bookmarks in Firefox and automatically transfers the recordings to our MP3 player. Some scripts for capturing other Internet radio shows will be available on the Linux Journal FTP site (see the on-line Resources). Now I just have to remember to delete the older files before my hard drive fills up with leftover Webcasts.
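That last chore is easy to automate with find. The directory and file-name pattern below are assumptions; point it at wherever your script writes its MP3s. The demo uses a throwaway directory so you can see the effect safely:

```shell
# Demo: delete captured shows older than 30 days
PODDIR=$(mktemp -d)
touch -d '40 days ago' "$PODDIR/wolf-old.mp3"   # stale capture
touch "$PODDIR/wolf-new.mp3"                    # recent capture
find "$PODDIR" -name 'wolf-*.mp3' -mtime +30 -delete
ls "$PODDIR"    # only wolf-new.mp3 survives
rm -r "$PODDIR"
```

A line like the find command above makes a natural second cron job.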
Thanks to Anne Troop, Jen Hamilton and Chris Riley for their many shell-scripting hints over the years; to Anne's friend Janeen Pisciotta for finding “Hour of the Wolf” for us in the first place; and to LJ Editor in Chief Don Marti for the cool podcast idea.
When streaming radio first came out, it was often transmitted in proprietary data formats, making it tough for Linux users to listen. Now most streams are MP3, but there may still be something in a different format that you want to capture, such as BBC Radio's RealPlayer streams; see the on-line Resources for a link. Assuming it's something MPlayer can handle, we can simply rearrange our process a bit: tell MPlayer to write the audio data to disk as a WAV file, then encode it with lame for MP3 or oggenc for Ogg files. Be aware, though, that lame is not included with Fedora, again due to patent issues.
The audio capture commands then would look like:
# Use mplayer to capture the stream
# at $STREAM to the file $FILE
/usr/local/bin/mplayer -really-quiet -cache 500 \
    -ao pcm:file="$FILE.wav" -playlist $STREAM &
# the & turns the capture into a background job
sleep $DURATION    # wait for the show to be over
kill $!            # kill the stream capture

# Encode to .ogg, quality 2, and tag the file
oggenc -q 2 -t $TITLE -a $AUTHOR -l $ALBUM \
    -n "1/1" -G "Radio" -R 16000 -o $FILE $FILE.wav
rm $FILE.wav    # Remove the raw audio data file
followed by the original call to the Perl script. There's no need to use id3v2 here, as both the lame and oggenc encoders insert tags as part of the encoding process. We wind up with the same result as capturing an MP3 stream directly, but because of the intermediate WAV file's large size, we need much more disk space during the actual capture. The optional -R 16000 specifies the sample rate of the captured WAV file; it's needed only if MPlayer does not correctly detect the speed of the incoming audio stream and your captured audio sounds like whale song or chipmunks. You probably want to comment out the rm command until you're sure the encoding is working the way you want it, and remove the WAV files manually in the meantime.