Streaming Audio in Linux
With the recent popularization of multimedia and the ever decreasing cost of hardware, the use of sound in new applications is becoming more appealing to developers each day. Sound can flavor a game, enhance the user interface of a word processor, and more, making computer use more fun and/or more effective.
There are two different forms of sound generation in common sound boards. The first is used primarily for music, through a MIDI interface and FM generators. The idea is to synthesize the sound of different instruments occurring at different times, either artificially or from previously recorded samples. The second approach is to use digital audio signals, which can represent any sound, from a bird chirp to a human voice—even musical signals.
The human auditory system perceives differences in the pressure of the air that reaches it. When we pluck a guitar string, for example, it starts to vibrate. This movement propagates through the air in waves of compression and decompression, just as we can see waves over a lake's surface when we throw a small rock into it.
Figure 1 shows a detailed representation of the human auditory system, which processes and decomposes the incoming sound before sending the information to the brain. There is a lot of literature about processing human speech (see References 1 and 2); here I'll cover just the basics rather than going into too much detail about the bio-medical considerations.
We “understand” fast oscillations of the air as sharp sounds and slow oscillations as bass sounds. When different oscillations are mixed together, we may lose this “musical” connotation: it is hard to classify the sound of a gunshot or of breaking glass. There are mathematical tools able to decompose an arbitrary sound into a set of simple oscillations (e.g., Fourier analysis), but these tools are quite beyond the scope of this article (see Reference 3).
A simple mathematical method for representing a sound phenomenon is a function that describes it—one that relates the intensity of the air pressure to time. This approach is commonly called temporal representation and is also what we obtain when using electronic transducers (like microphones), which map air pressures to electric voltages over time. At the end of the electronic processing, such as recording or amplifying, another transducer (a speaker, for example) is responsible for recovering the air pressures from the electric intensities so that we can hear them.
For analog electronics this model is good enough; however, for modern digital computers or DSPs (special devices for signal processing), there is one problem—how to store and process the infinite amount of information contained in even a small interval of a continuous function within a limited amount of memory.
We have to work with digital audio, obtained from the continuous mathematical model through two steps: sampling and quantization. The sampling step represents the domain of the function with only a limited number of points. This is usually done through uniform punctual sampling; in other words, we keep the value of the function only at certain points, equally spaced from each other. Clearly, some loss of information may occur if the function varies too quickly between two adjacent samples (see Figure 2).
The number of samples taken each second, the sampling rate, should be chosen with care. The Shannon sampling theorem states that if the sampling is done at least twice as fast as the “fastest oscillation” (the highest frequency) in the signal, then it is possible to recover the original sound; this minimum rate is called the Nyquist rate.
It is important to have a good idea of the type of signal that you are processing, in order to choose an appropriate sampling rate. Large sampling rates give better quality, but also require more memory and computer power to process and store.
The quantization step sets the value stored for the function at each discrete sample. A simple approach would be to use a float, thus requiring four bytes per sample. In general this is far beyond the required quality, and people use either one-byte (unsigned character) or two-byte (signed word) samples.
Let's compare two concrete examples to see how these choices can dramatically change the amount of memory required to keep and process the sounds. On a compact disc the audio is stereo (two different tracks, left and right), at 44,100 samples per second, using two bytes to keep each sample. One minute of it will require: 2 x (60 x 44,100) x 2 = 10,584,000 bytes, about 10.5MB of storage.
In the case of voice mail or a telephone-based system where the controlling computer plays audio back to the user, these numbers are quite different. There is no need for so many samples, since the phone lines have a serious quality limitation. Usually these systems take 8,000 samples per second, use only one channel (mono) and keep only one byte per sample. In the same way, one minute will need: 1 x (60 x 8,000) x 1 = 480,000 bytes, about 480KB of information.