The Linux Softsynth Roundup
Software sound synthesis (SWSS) has an honorable lineage in the history of computers. Early experiments in digital sound synthesis took place at the famous Bell Labs, where a team of researchers centered around Max Mathews created the Music N series of audio synthesis software, culminating in Music V in 1969. Since that time, Music V has evolved into a series of notable digital sound synthesis environments, such as Csound, Cmix/RTCmix and Common LISP Music. These environments typically provide the user with a language for specifying the nature of sonic events, such as musical notes or sampled sounds. These languages usually present users with a distinction between instruments (the sound-producing designs) and scores (event characteristics, such as start time, duration and synthesis parameters). Users compose their instruments and scores in their preferred SWSS language and then feed them to the language's compiler. Output is directed to a file, which then can be played by any sound system supporting the file format or, with sufficiently powerful hardware, the output can be directed to a digital-to-analog converter for real-time audio output.
A standalone software synthesizer (softsynth) substitutes real-time control for the score aspect of the model above. Softsynths typically come with attractive GUIs, often emulating the appearance and operation of a hardware synthesizer, and a MIDI keyboard or external sequencer is the expected controller. Under the right circumstances, a softsynth can be controlled by a concurrent process. For example, using the ALSA aconnect utility, a softsynth can be wired to a MIDI sequencer running on the same machine. Then, sequences can be recorded and played via the softsynth, eliminating the need for an external synthesizer and containing the MIDI environment on a single computer.
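At the command line, the wiring described above might look something like the following (the client:port numbers are purely illustrative; aconnect assigns them at runtime, so check the listings on your own system first):

```shell
# List available ALSA sequencer ports (the numbers below are examples;
# yours will differ):
aconnect -i          # sender ports, e.g. a sequencer at 128:0
aconnect -o          # receiver ports, e.g. a softsynth at 129:0

# Wire the sequencer's output into the softsynth's MIDI input:
aconnect 128:0 129:0

# Undo the connection when finished:
aconnect -d 128:0 129:0
```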
A softsynth can be dedicated to a particular synthesis method (additive, subtractive, FM, etc.), or it can be open-ended and modular. In short, additive synthesis works by summing sine waves with varying frequencies, amplitudes and phases until the desired sound is attained. Additive synthesis is a computationally expensive synthesis method with a formidable amount of detail required for realistic sounds. Subtractive synthesis begins with a sound source rich in frequencies (such as a sawtooth wave or noise), then filters frequencies out until the desired sound has been sculpted from the original source. Subtractive synthesis is relatively easy to implement in hardware and software, and its sounds are characteristically associated with the analog synthesizers of the 1970s. FM (frequency modulation) synthesis works by modulating the frequency of one oscillator (the carrier) with the output of another (the modulator), creating complex audio spectra with little computational expense. Yamaha's DX7 synthesizer is the most famous FM implementation, and the company's OPL3 sound chip is certainly the most infamous.
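The additive and FM methods can be sketched in a few lines of Python. This is a minimal illustration rather than code from any of the softsynths discussed here; the 44.1kHz sample rate, function names and parameter values are all assumptions:

```python
import math

SR = 44100  # assumed sample rate in Hz

def additive(partials, dur=0.01, sr=SR):
    """Additive synthesis: sum sine partials, each given as
    (frequency, amplitude, phase)."""
    n = int(dur * sr)
    return [sum(a * math.sin(2 * math.pi * f * t / sr + ph)
                for f, a, ph in partials)
            for t in range(n)]

def fm(carrier=440.0, modulator=220.0, index=2.0, dur=0.01, sr=SR):
    """Simple FM: the modulator deviates the carrier's phase, producing
    sidebands spaced at multiples of the modulator frequency."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * carrier * t / sr
                     + index * math.sin(2 * math.pi * modulator * t / sr))
            for t in range(n)]

# First three partials of a sawtooth-like tone at 220 Hz:
saw = additive([(220.0, 1.0, 0.0), (440.0, 0.5, 0.0), (660.0, 1 / 3, 0.0)])

# A non-integer carrier/modulator ratio gives an inharmonic, bell-like tone:
bell = fm(carrier=440.0, modulator=627.0, index=3.0)
```

Note how the additive voice needs one sine term per partial, while a single pair of FM oscillators already yields a rich spectrum, which is exactly the computational trade-off described above.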
Physical modelling and granular synthesis are two more recent synthesis methods. Physical modelling synthesis models the mechanics of a real or imaginary instrument and the physics of its activation. The method's parameters are based less on familiar sono-musical models, such as waveforms, frequencies and amplitudes, and more on the characteristics of physically excited systems, such as airflow through a tube, the vibrations of a plucked string or the radiating patterns of a struck membrane. Physical modelling has become a popular synthesis method and is deployed in synthesizers from Korg, Yamaha and others. Granular synthesis creates sounds by ordering sonic quanta or grains into more or less dense sonic masses. Again, its parameters are not as intuitive as those of the older synthesis methods, but it is powerful and can create a wide range of sounds. Granular synthesis has yet to find its way into a popular commercial synthesizer, but hardware implementations are found in the Kyma system and the UPIC workstation.
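One of the simplest physical models is the classic Karplus-Strong plucked string, which can serve as a taste of the method. The sketch below is illustrative only (the duration and sample rate are assumed values): a burst of noise circulates through a delay line, and averaging adjacent samples acts as a lowpass filter, so the high frequencies die away first, just as on a real string:

```python
import random
from collections import deque

def pluck(freq, dur=0.5, sr=44100):
    """Karplus-Strong plucked string: the delay-line length sets the
    pitch; the averaging filter models energy loss in the string."""
    n = int(sr / freq)                       # delay-line length in samples
    line = deque(random.uniform(-1.0, 1.0)   # the "pluck": a noise burst
                 for _ in range(n))
    out = []
    for _ in range(int(dur * sr)):
        first = line.popleft()
        out.append(first)
        # Feed back the average of two neighbors (a one-zero lowpass):
        line.append(0.5 * (first + line[0]))
    return out

string = pluck(220.0)  # a half-second pluck at 220 Hz
```

The parameters here (delay length, feedback filter) correspond directly to physical properties, string length and damping, rather than to waveforms and envelopes.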
A softsynth can be dedicated wholly to a single synthesis method, it can be a hybrid of two or more methods, or it can take a more open-ended modular design. Each architecture has its strengths. Broadly speaking, the modular design is perhaps the most flexible, but it may sacrifice fineness of control (resolution) for generality of purpose. A dedicated-method softsynth lacks the modular synth's flexibility but usually provides much finer parameter control.
Modular synthesizers encourage a building-block approach by providing separate synthesis primitives for connection in arbitrary ways. For example, an oscillator's output can be directed to the input of an envelope generator (EG) or vice versa, routing the EG's output to an oscillator input. This kind of black box networking lends itself to software emulation, as we'll see when we meet some modular synths later in this article.
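The building-block idea can be sketched by modelling each module as a Python generator and a patch cord as a function that wires one module's output to another's input. This is purely illustrative, assuming a 44.1kHz sample rate; real modular softsynths use far more refined schemes:

```python
import math

SR = 44100  # assumed sample rate in Hz

def osc(freq, sr=SR):
    """Sine oscillator module: yields one sample per pull."""
    t = 0
    while True:
        yield math.sin(2 * math.pi * freq * t / sr)
        t += 1

def eg(attack, release, sr=SR):
    """Envelope generator module: a linear attack/release ramp."""
    a, r = int(attack * sr), int(release * sr)
    for i in range(a):
        yield i / a
    for i in range(r):
        yield 1.0 - i / r

def patch(source, envelope):
    """Virtual patch cord: scale the source by the envelope's output."""
    for s, e in zip(source, envelope):
        yield s * e

# Wire a 440 Hz oscillator through a 10 ms attack / 50 ms release EG:
note = list(patch(osc(440.0), eg(0.01, 0.05)))
```

Because every module shares the same interface (a stream of samples), any output can be patched to any input, which is the black-box property that makes modular designs so amenable to software emulation.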
The distinctions between the general types of software are blurring. For example, Csound is now available with a set of FLTK-based widgets for user-designed control panels. Many users already have created elaborate GUIs for Csound's various synthesis methods, some of which are detailed enough to function as standalone Csound-based softsynths. This trend is likely to continue with GUIs evolving for the Common LISP Music and RTCmix SWSS environments.
Graphic patching SWSS environments like jMax and Pd are another indicator of this blurring tendency. They also provide graphics widgets that can be used to construct synthesizer interfaces, but unlike Csound, these widgets are an integral aspect of the basic working environment. jMax and Pd utilize a unique combination of graphics and language primitives that are patched together by virtual wires to create a synthesis or processing network. These environments certainly can be employed as softsynths, but their generality of purpose places them closer to Csound than to the softsynths reviewed here.
Beatbox-style synths are yet another softsynth design category. These programs combine elements of a synthesizer, a drum machine and a sequencer for an all-in-one accompaniment package, though the more sophisticated examples are truly more flexible music composition systems.
These distinctions are brief, but for this article they suffice to indicate the basic types of softsynths. For complete definitions of the various synthesis methods and synthesizer architectures, see the standard references listed in Resources.