Internet Radio to Podcast with Shell Tools
It all started because I wanted to listen to “Hour of the Wolf” on WBAI radio—it's a cool science-fiction radio program hosted by Jim Freund that features readings, music, author interviews and good “I was there when...” kind of stories. Unfortunately for me, WBAI broadcasts from Long Island, New York, and is too far away from me to receive well. Plus, the show is on Saturday mornings from 5 to 7AM EST—not a really welcoming timeslot for us working folks.
Then, I discovered that WBAI has streaming MP3 audio on its Web site, which solved the reception problem. That left the Oh-Dark-Hundred problem—I'm normally settling into a deep sleep at that hour. And science-fiction buff or no, I'm not going to be catching Jim live any time soon.
What I needed was a VCR for Internet radio. Specifically, I wanted to capture the stream and save it to disk as an MP3 file, named with the show name and date. I would need to add the proper MP3 ID tags so I could load it into my Neuros audio player for convenient listening. It also would be awfully nice if I could let RSS-compatible software know that I've captured these files. That way, they would show up in a Firefox live bookmark or could be transferred to an iPod during charging. The ultimate effect would be to create an automatic podcast—a dynamically updated RSS feed with links to saved recordings—by snipping a single show out of an Internet media stream at regular intervals.
So, off I went to Google to search for “mp3 stream recording” and “tivo radio” and so on. I found many packages and Web sites, but nothing seemed quite right. Then, I heard a voice from my past—that of the great Master Foo in Eric S. Raymond's “The Rootless Root”, which said to me: “There is more UNIX-nature in one line of shell script than there is in ten thousand lines of C.” So, I wondered if I could accomplish the task using the tools already on the system, connected by a simple shell script.
You see, I already could play the stream by using the excellent MPlayer media player software. Due to patent problems, Fedora Core 3 doesn't ship with MP3 support, so I previously had downloaded and built MPlayer from source as part of the process of MP3-enabling my system. On a side note, MPlayer makes extensive use of the specific hardware features of each different CPU type, so it performs much better as a video player if it is built from source on the machine where you plan to use it. The command:
mplayer -cache 128 \
        -playlist http://www.2600.com/wbai/wbai.m3u
served admirably to play the stream through my speakers. All that was left to do was convince MPlayer to save to disk instead. The MPlayer man page revealed -dumpaudio and -dumpfile <filename>, which work together to read the stream and silently save it out to disk, forever and ever. There's no time-out, so it captures until you kill the MPlayer process. Therefore, I wrote this script:
#!/bin/bash
mplayer -cache 128 \
        -playlist http://www.2600.com/wbai/wbai.m3u \
        -dumpaudio -dumpfile test.mp3 &
# the & sets the job running in the background
sleep 30s
kill $!   # kill the most recently backgrounded job
which nicely captured a 30-or-so-second MP3 file to disk. The & character at the end of the mplayer command above is critical; it makes MPlayer run as a background task, so the shell script can continue past it to the next command, a timed sleep. Once the sleep is done, the script then kills the last backgrounded task, ending the recording. You may need to adjust the -cache value to suit your Internet connection or even substitute -nocache.
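Extending the test script toward the goal in part one, a capture wrapper might name the output file after the show and date. This is only a sketch of the idea, not the article's finished script: the show name, the file-name pattern and the two-hour recording length are my placeholders.

```shell
#!/bin/bash
# Sketch: capture a dated recording of the stream.
# SHOW, the file-name pattern and the 2h duration are illustrative guesses.
SHOW="hour-of-the-wolf"
DATE=$(date +%Y-%m-%d)
OUTFILE="${SHOW}-${DATE}.mp3"

mplayer -cache 128 \
        -playlist http://www.2600.com/wbai/wbai.m3u \
        -dumpaudio -dumpfile "$OUTFILE" &

sleep 2h        # record for the length of the show
kill $!         # stop the capture
```

Run from cron at the show's start time, this leaves behind one dated MP3 per week.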
Now that part one was accomplished, I was on to part two—inserting the MP3 ID tags. Back on Google, I found id3v2, a handy little command-line program that adds tags to an MP3 file—and it's already in the Fedora Core distribution! It's amazing, the things that are lurking on your hard drive.
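A tagging step with id3v2 might look like the following; the artist, album and title values here are my own illustrative choices, not taken from the article.

```shell
#!/bin/bash
# Sketch: tag a captured file so it sorts sensibly on a portable player.
# The tag values and file name are placeholders for illustration.
DATE=$(date +%Y-%m-%d)
FILE="hour-of-the-wolf-${DATE}.mp3"

id3v2 --artist "WBAI 99.5 FM" \
      --album  "Hour of the Wolf" \
      --song   "Hour of the Wolf ${DATE}" \
      --year   "$(date +%Y)" \
      "$FILE"
```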
I now had the tools in place to capture and tag my favorite shows. That left the task of building a syndication feed from the stack of files. It turns out that RSS feeds are simple eXtensible Markup Language (XML) files that contain links to the actual data we want to feed, whether that be a Web page or, as in this case, an MP3 file.
Another quick look at Google brought me to the XML::RSS module for Perl. It's a complete set of tools that both can create new RSS files and add entries to existing ones. At this point, I thought I was almost done and put together a nice code example that almost worked. In true project timeline tradition, however, the last 5% of the project turned out to require 95% of the total time.