How well-connected are you? Drew Streib can tell you to four decimal places. Drew, who now runs an OpenPGP keyserver in addition to his other thankless tasks, is currently publishing monthly reports on how closely OpenPGP users are connected to the Web of Trust. His math, based on earlier calculations by Neal McBurnett, is complicated, but the result is a current map of the community's Web of Trust.
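The details are Drew's, but the flavor of the metric is easy to see: treat keys as nodes and signatures as edges, then score each key by its average distance to every other key it can reach. Here's a toy Python sketch of such a mean-shortest-distance score; it's an illustration under simplified assumptions (an undirected, made-up signature graph), not Drew's actual method:

    # Illustrative only: a toy mean-shortest-distance score over a
    # hypothetical key-signature graph (undirected for simplicity).
    from collections import deque

    graph = {
        "alice": {"bob", "carol"},
        "bob": {"alice", "carol", "dave"},
        "carol": {"alice", "bob"},
        "dave": {"bob"},
    }

    def mean_shortest_distance(graph, start):
        """Average BFS distance from `start` to every other reachable key."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in dist:
                    dist[neighbor] = dist[node] + 1
                    queue.append(neighbor)
        others = [d for key, d in dist.items() if key != start]
        return sum(others) / len(others)

    # Lower is better-connected: bob sits nearer the center than dave.
    for key in graph:
        print(f"{key}: {mean_shortest_distance(graph, key):.4f}")

Every new signature adds an edge, which can only shorten paths, and that is why the rankings below reward people who sign early and often.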
Closest to the center of the Web are crypto luminaries and organizers of key-signing events, including Peter N. Wan, Ingmar Camphausen and Theodore Ts'o. Philip R. Zimmermann, who wrote the original PGP, is only number 24.
Drew's report comes at an exciting time for encrypted mail. GNU Privacy Guard, a free OpenPGP implementation, ships with common distributions; support in popular mailers such as mutt makes encryption convenient to use; and the FBI's much-publicized Carnivore snooping system certainly hasn't hurt.
Signing people's keys to do better in Drew's rankings might seem like a pointless game, but it really does expand the Web of Trust. You can never lose juice by exchanging signatures with someone else, and it helps everyone's ability to send trusted, encrypted mail. Even if you sign the key of some “lamer” at the bottom of the list, you'll both move up next month. (As for me, I got a Theodore Ts'o! Look out next month.)
On Monday, July 30, 2001, the US Copyright Office convened the Copyright Arbitration Royalty Panel (yes, CARP, loc.gov/copyright/carp), which will soon decide the conditions under which webcasters are required to make royalty payments. The results could be highly inconvenient for webcasters of all kinds. Howard Greenstein, a webcasting pioneer, puts it this way in his weblog:
Webcasters, many of whom have been accounting for what they have estimated they would have to pay under a negotiated compulsory license (and putting aside revenue for years), are about to find out (within 60 days) what it will cost them. Unless, of course, they are an “interactive” station. If you're a standard station under the Digital Millennium Copyright Act, you play music in a certain way. You don't give people much choice about what they hear.
Yet the number of streaming sources on the Net runs into uncounted thousands (or perhaps millions). What's more, many of these are far more interactive than traditional broadcasting has ever been or can even comprehend. What's the news for them? Easy: work outside the system.
That's what KPIG has been doing since it became the first commercial radio station ever to broadcast on the Web. KPIG broadcasts from (no kidding) Freedom, California on 107-oink-5 on the FM band. On the Web, however, KPIG is a virtual Idaho. Its 128Kbps MP3 stream is one of the Web's hi-fi music beacons. So are the half-dozen or so other streams the station puts out at various speeds for various clients and bandwidths (and with content other than KPIG alone). Naturally (their site reports) they digitize that content on a Linux PC with the open-source LAME MP3 encoder (mp3dev.org/mp3).
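The encoding step itself is refreshingly simple. Here's a minimal Python sketch, assuming the lame binary is installed and using hypothetical file names, of the kind of 128Kbps constant-bitrate encode KPIG's stream implies; a live setup would read from a sound card and feed an Icecast source client rather than writing a file:

    # Illustrative only: encode a WAV file to 128Kbps MP3 by shelling
    # out to the LAME binary. File names here are hypothetical.
    import subprocess

    def encode_mp3(wav_path, mp3_path, kbps=128):
        """Run lame at a constant bitrate; raise if the encoder fails."""
        subprocess.run(["lame", "-b", str(kbps), wav_path, mp3_path],
                       check=True)

    encode_mp3("aircheck.wav", "aircheck.mp3")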
KPIG, which once described its format as “mutant cowboy rock and roll”, is one of the few remaining commercial stations where the disc jockeys still choose the music, and community ties are so close it's hard to tell where the station ends and its constituency begins. As a successful business (it has always done pretty well in the ratings and sells plenty of advertising), KPIG also has managed to remain both artist- and industry-friendly. Every song the station plays is listed live on the Web, along with links that make it easy to buy the CD, research the artist or follow a tour schedule. Without a doubt, KPIG owns the high-mud mark for combining commercial success, community involvement, resourceful use of free and open-source software and adaptiveness to a surreally perverse environment.
The hacker in chief at KPIG is “Wild Bill” Goldsmith, one of KPIG's Founding Farmers and the proprietor of RadioParadise.com. Unencumbered by the need to participate in the fully regulated environment of commercial broadcasting, Radio Paradise is beating a path through the uncharted wilderness where artists and technically smart connoisseurs will rebuild their own industry from the outside in. Asked for the technical angle on Radio Paradise, Bill writes:
[Radio Paradise is] based on a set of software tools—for picking and scheduling music and doing voice tracks from anywhere over the Net, and for accepting and organizing listener feedback on my playlist. Everything I'm doing software-wise is 100% open source: Linux, PHP, Perl, Postgres, and Icecast.
I am convinced that what you see at radioparadise.com represents the future of radio, or of quality radio, anyway: very interactive; tightly controlled artistically (no random segues, everything happens for a reason); completely free from the influences of the radio/music industry hype machine (to the best of my ability, anyway); and supported primarily by voluntary contributions from listeners.
This isn't a game plan that's going to make anyone rich. But it can make it possible for anyone with talent to make a very comfortable living without compromising their integrity in any way—and that's all I for one have ever wanted.
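I haven't seen Bill's code, and his tools are PHP and Perl over Postgres, but the "everything happens for a reason" idea is easy to sketch. Here's a purely hypothetical Python fragment, with invented song attributes, that picks the next song by listener rating while vetoing jarring segues:

    # Hypothetical illustration of "no random segues": favor the
    # highest-rated song whose energy is close to the current one.
    # Not Bill's code; every attribute here is invented.
    from dataclasses import dataclass

    @dataclass
    class Song:
        title: str
        rating: float  # average listener feedback score
        energy: int    # 1 (mellow) to 10 (intense)

    def next_song(current, library, max_jump=2):
        """Highest-rated candidate within max_jump energy of the current song."""
        candidates = [s for s in library
                      if s is not current
                      and abs(s.energy - current.energy) <= max_jump]
        return max(candidates, key=lambda s: s.rating)

    library = [Song("Quiet Ballad", 8.1, 3),
               Song("Roadhouse Stomp", 9.0, 8),
               Song("Mid-tempo Shuffle", 7.5, 5)]
    print(next_song(library[0], library).title)  # picks the smooth segue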
I'm an old radio freak and have been a fan of KPIG and its ancestors going back to the Sixties. Living, breathing radio stations like KPIG, run by people who love the business more for the good it does than for the money it makes, have gone out like candles in the rain—first one by one, then by the dozens and finally by the thousands.
It's not surprising to find a hacker starting a bonfire with the last candle that stands.