Streaming Audio in Linux

This article presents a brief description of the nature and perception of sound, how to deal with sound as an object in a computer, and the available software for streaming audio.

Sound Card

Since it is important to have your sound device fully functional, several configuration details should be checked, such as sound support in the kernel (which can be compiled as a kernel module). The SOUND-HOWTO by Jeff Tranter is a good reference.

Sound Files

Once we have the digital audio signal in hand it is important to encode it. There are many types of sound files (au, wav, aiff, mpeg), each with its pros and cons. Some of them simply stack the samples in a vector, byte after byte, while others try to compress the signal through transformations and sometimes heavy computations.

Fortunately, on sunsite you can find AFsp, by Peter Kabal, a library that reads and writes these files for you. I have not used it, since by the time I found it, I had already written some code to do the same operations. AFsp is very well documented.

Simple Play Operation

There are many good sources of information around the Web on how to program the input and output of sound in Linux. The first you should check out is the Programmer's Guide to OSS. It contains all the information you need for controlling and manipulating the different aspects of your audio hardware, such as MIDI, mixing capabilities and, of course, digital audio.

In Linux the sound hardware is generally controlled through a device (normally /dev/audio). You have to activate it with a call to open, then set some parameters (sample rate, quantization method, and mono or stereo) using ioctl (I/O control). These basic steps for playing or recording are well illustrated in the “Basic Audio” section of the guide mentioned above.

Playing is accomplished by a write operation of the vector of samples to the device, while recording is done through a read. Some subtle details, such as the order in which these operations must be done, are being skipped here for the sake of simplicity.

Streaming Audio and Interactive Applications

If you are planning to create applications that require real-time interaction (for example, a game engine) and have to continuously stream an audio sequence, there are some important measures to take to ensure that the audio buffer neither overflows nor underflows.

The first case, overflows, can be solved by knowing a bit more about the OSS implementation (check the “Making Audio Complicated” page of the OSS Programmer's Guide). The buffer is partitioned into a number of equal pieces, and you can fill one of them while the others are in line for playing. Some ioctl calls will give you information about the total available space in the buffer, so that you can avoid blocking. You can also use IPC (Inter-Process Communication) techniques and create a separate process responsible only for buffer manipulation.

When you send audio to the output at a slower rate than the device plays it, the buffer empties before you send more data, causing an underflow. The resulting effect is disturbing and sometimes difficult to diagnose as an underflow problem. One possible solution is to play the audio at a lower sample rate, thus giving the computer more time to process the data.

Available Software

Looking at how other people write their programs helps to understand the inner difficulties of the implementation problems. So, I looked around the Net to see what was available and was quite impressed by both the quantity and the quality of the software I found.

A good, simple example of a “sound-effect server” is sfxserver by Terry Evans, available on sunsite. It takes control of the audio device and accepts commands (currently only from stdin) such as “load a new effect”, “start playing the loaded effect”, and so on. In the same place you can also find sfxclient, an example client program.

Generic network audio systems have taken the approach of keeping high-level application development far away from hardware and device manipulation. The Network Audio System (nas) is one implementation of this paradigm, following the same idea and framework as the X Window System. It runs on many architectures, such as Sun, SGI, HP-UX and Linux. Through it you can write applications that take advantage of sound across the network without worrying about where you are actually working; the network layer takes care of everything for you. nas comes with documentation and many client examples. You can download it from sunsite, along with a pre-compiled rpm package. Some games, like XBoing and xpilot, already support it.

Another network transport implementation is netaudio. Unlike nas, it is not intended to work as an intermediate layer between applications and devices; it is responsible only for the real-time transmission of data across the Net, allowing some interesting features such as rebroadcasting. Its great advantage is its compactness: the gzipped tar file is around 6KB. The basic idea is to use another program in a pipe-like structure to play after reception (or record for transmission). The README file gives examples of how to compress the audio with other programs to reduce the required bandwidth, making it a free, real-time audio alternative for the Web.

A similar package, using LPC compression methods (designed for voice only), is Speak Freely for Unix by John Walker.

Another interesting example of an audio streaming application is MPEG audio playing. The compression achieved by this method is impressive, making high-quality audio on demand over the Internet possible. Unfortunately, a fast machine is required for real-time playback.

Again, looking at your sunsite mirror, you will find several implementations. One that is fast, interesting and attention-getting is mpg123, from Michael Hipp and Oliver Fromme.