The Linux Softsynth Roundup
Software sound synthesis (SWSS) has an honorable lineage in the history of computers. Early experiments in digital sound synthesis took place at the famous Bell Labs, where a team of researchers centered on Max Mathews created the Music N series of audio synthesis software, culminating in Music V in 1969. Since that time, Music V has evolved into a series of notable digital sound synthesis environments, such as Csound, Cmix/RTCmix and Common LISP Music. These environments typically provide the user with a language for specifying the nature of sonic events, such as musical notes or sampled sounds. These languages usually present users with a distinction between instruments (the sound-producing designs) and scores (event characteristics, such as start time, duration and synthesis parameters). Users compose their instruments and scores in their preferred SWSS language and then feed them to the language's compiler. Output is directed to a file, which then can be played by any sound system supporting the file format or, with sufficiently powerful hardware, the output can be directed to a digital-to-analog converter for real-time audio output.
A standalone software synthesizer (softsynth) substitutes real-time control for the score aspect of the model above. Softsynths typically come with attractive GUIs, often emulating the appearance and operation of a hardware synthesizer, and a MIDI keyboard or external sequencer is the expected controller. Under the right circumstances, a softsynth can be controlled by a concurrent process. For example, using the ALSA aconnect utility, a softsynth can be wired to a MIDI sequencer running on the same machine. Then, sequences can be recorded and played via the softsynth, eliminating the need for an external synthesizer and containing the MIDI environment on a single computer.
A softsynth can be dedicated to a particular synthesis method (additive, subtractive, FM, etc.), or it can be open-ended and modular. In short, additive synthesis works by summing sine waves with varying frequencies, amplitudes and phases until the desired sound is attained. Additive synthesis is a computationally expensive synthesis method, and a formidable amount of detail is required for realistic sounds. Subtractive synthesis begins with a sound source rich in frequencies (such as a sawtooth wave or noise), then filters frequencies out until the desired sound has been sculpted from the original source. Subtractive synthesis is relatively easy to implement in hardware and software, and its sounds are characteristically associated with the analog synthesizers of the 1970s. FM (frequency modulation) synthesis works by modulating the frequency of one oscillator (the carrier) with the output of another (the modulator), creating complex audio spectra with little computational expense. Yamaha's DX7 synthesizer is the most famous FM implementation, and the company's OPL3 sound chip is certainly the most infamous.
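For readers who think best in code, the core of additive and FM synthesis fits in a few lines of Python. This is only an illustrative sketch, not code from any synth reviewed here; the function names, the choice of partials and the modulation index are all invented for the example.

```python
import math

SR = 44100  # sample rate in Hz

def additive(partials, t):
    """Additive synthesis: sum sine waves, one (frequency, amplitude) pair
    per partial, evaluated at time t in seconds."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in partials)

def fm(carrier_hz, mod_hz, index, t):
    """Two-operator FM: the modulator's output shifts the carrier's phase,
    producing sidebands (a complex spectrum) from just two oscillators."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

# Render one second of each as lists of float samples.
partials = [(440, 0.5), (880, 0.25), (1320, 0.125)]  # a bright A4 tone
add_samples = [additive(partials, n / SR) for n in range(SR)]
fm_samples = [fm(440, 110, 2.0, n / SR) for n in range(SR)]
```

Note how cheaply FM buys its spectral complexity: two sine oscillators yield a whole family of sidebands, where the additive approach needs one oscillator per partial.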
Physical modelling and granular synthesis are two more recent synthesis methods. Physical modelling synthesis models the mechanics of a real or imaginary instrument and the physics of its activation. The method's parameters are based less on familiar sono-musical models, such as waveforms, frequencies and amplitudes, and more on the characteristics of physically excited systems, such as airflow through a tube, the vibrations of a plucked string or the radiating patterns of a struck membrane. Physical modelling has become a popular synthesis method and is deployed in synthesizers from Korg, Yamaha and others. Granular synthesis creates sounds by ordering sonic quanta or grains into more or less dense sonic masses. Again, its parameters are not so intuitive as in the older synthesis methods, but it is powerful and can create a wide range of sounds. Granular synthesis has yet to find its way into a popular commercial synthesizer, but hardware implementations are found in the Kyma system and the UPIC workstation.
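The flavor of physical modelling can be tasted in the classic Karplus-Strong plucked-string algorithm, which models a string as a recirculating delay line. The sketch below is a minimal stdlib-Python rendition for illustration only; parameter values such as the 0.996 decay factor are arbitrary choices, not drawn from any synth discussed in this article.

```python
import random
from collections import deque

def karplus_strong(freq_hz, duration_s, sample_rate=44100):
    """Karplus-Strong plucked string: a burst of white noise circulates
    through a delay line whose length sets the pitch. Averaging adjacent
    samples acts as a low-pass filter, damping the high frequencies the
    way a real string does as it rings."""
    n = int(sample_rate / freq_hz)                        # delay ~ one period
    line = deque(random.uniform(-1, 1) for _ in range(n)) # the "pluck"
    out = []
    for _ in range(int(duration_s * sample_rate)):
        s = line.popleft()
        out.append(s)
        line.append(0.996 * 0.5 * (s + line[0]))  # average + slight decay
    return out

samples = karplus_strong(220, 0.5)  # half a second of a plucked A3
```

The physical-modelling character is evident in the parameters: nothing here specifies a waveform or a spectrum; instead, the pitch, brightness and decay all emerge from the mechanics of the feedback loop.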
A softsynth can be dedicated wholly to a single synthesis method, it can be a hybrid of two or more methods, or it can take a more open-ended modular design. Each architecture has its strengths. Broadly speaking, the modular design is perhaps the most flexible, but it may sacrifice fineness of control (resolution) for generality of purpose. A dedicated-method softsynth lacks the modular synth's flexibility but usually provides much finer parameter control.
Modular synthesizers encourage a building-block approach by providing separate synthesis primitives for connection in arbitrary ways. For example, an oscillator's output can be directed to the input of an envelope generator (EG) or vice versa, routing the EG's output to an oscillator input. This kind of black box networking lends itself to software emulation, as we'll see when we meet some modular synths later in this article.
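The black-box model maps naturally onto code: each module is an object with a per-sample tick method, and patching is just handing one module to another. The sketch below is a hypothetical miniature, not the architecture of any synth reviewed here; the class names Osc and EG and their interfaces are invented for the example.

```python
import math

class Osc:
    """A sine-wave oscillator module."""
    def __init__(self, freq_hz, sample_rate=44100):
        self.freq, self.sr, self.n = freq_hz, sample_rate, 0
    def tick(self):
        s = math.sin(2 * math.pi * self.freq * self.n / self.sr)
        self.n += 1
        return s

class EG:
    """A linear-decay envelope generator that scales whatever module
    is patched into its input."""
    def __init__(self, source, decay_s, sample_rate=44100):
        self.source, self.length, self.n = source, int(decay_s * sample_rate), 0
    def tick(self):
        gain = max(0.0, 1.0 - self.n / self.length)
        self.n += 1
        return gain * self.source.tick()

# The patch: oscillator output -> envelope input, as on a modular panel.
patch = EG(Osc(440), decay_s=0.25)
samples = [patch.tick() for _ in range(11025)]
```

Because every module shares the same tick interface, modules connect in arbitrary ways; the reverse patch mentioned above (an EG driving an oscillator's frequency) would just mean passing an EG where a frequency source is expected.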
The distinctions between the general types of software are blurring. For example, Csound is now available with a set of FLTK-based widgets for user-designed control panels. Many users already have created elaborate GUIs for Csound's various synthesis methods, some of which are detailed enough to function as standalone Csound-based softsynths. This trend is likely to continue with GUIs evolving for the Common LISP Music and RTCmix SWSS environments.
Graphic patching SWSS environments like jMax and Pd are another indicator of this blurring tendency. They also provide graphics widgets that can be used to construct synthesizer interfaces, but unlike Csound, these widgets are an integral aspect of the basic working environment. jMax and Pd utilize a unique combination of graphics and language primitives that are patched together by virtual wires to create a synthesis or processing network. These environments certainly can be employed as softsynths, but their generality of purpose places them closer to Csound than to the softsynths reviewed here.
Beatbox-style synths are yet another softsynth design category. These programs combine elements of a synthesizer, a drum machine and a sequencer for an all-in-one accompaniment package, though the more sophisticated examples amount to flexible music composition systems in their own right.
These distinctions are brief, but for this article they suffice to indicate the basic types of softsynths. For complete definitions of the various synthesis methods and synthesizer architectures, see the standard references listed in Resources.