An Introduction To OSC

At the end of my profile of AlgoScore I stated that my one wished-for addition to that program would be support for OpenSound Control (OSC). Well, my wish has been granted: the latest AlgoScore supports OSC, and I'm a happy guy. This article introduces OSC and explains why it makes me a more pleasant fellow.

A Little History

The history of OSC begins with the history of MIDI. When the major hardware synthesizer manufacturers adopted MIDI as a standard for interdevice communications it was widely and justly hailed as a breakthrough in music technology. Armed with a computer, the appropriate software, and a few synthesizers a single musician could write, record, and produce an entire piece with no other assistance. MIDI revolutionized the music industry, and its continued use is a good measure of the success of the standard. However, MIDI is far from perfect, and many musical purposes are not served well or at all by MIDI software and hardware. As a result, alternative protocols have been advanced.

In 1994 the ZIPI protocol was announced and described in the Computer Music Journal. ZIPI addressed many of MIDI's shortfalls, but the specification failed to attract significant interest, commercial or otherwise. Fortunately, ZIPI's designers were undismayed by this experience, and they continued to investigate possible alternatives to MIDI.

In 1997 ZIPI developers Matt Wright and Adrian Freed unveiled the OpenSound Control protocol, better known simply as OSC. OSC is a good example of a modern network data transport control system. Such a system defines the types of data it carries and manages streams of those data types. Like other transport protocols OSC enables communication between computers and other media devices, and of course OSC also allows communication between programs running on the same machine. OSC has been designed for musical purposes, but it is certainly capable of serving in other capacities.

OSC Features

The OSC Web sites offer this list of the protocol's attractions:

  • Open-ended, dynamic, URL-style symbolic naming scheme.
  • Symbolic and high-resolution numeric argument data.
  • Pattern matching language to specify multiple recipients of a single message.
  • High resolution time tags.
  • "Bundles" of messages whose effects must occur simultaneously.
  • Query system to dynamically find out the capabilities of an OSC server and get documentation.
  • Network-based system utilizes common UDP/TCP transport mechanisms.
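The first three items on that list can be made concrete with a small sketch. The address space and method names below are hypothetical, and Python's `fnmatch` stands in for OSC's pattern language (it covers `?`, `*`, and `[]`, though not OSC's `{a,b}` alternatives):

```python
from fnmatch import fnmatchcase

# Hypothetical OSC address space on a receiver: each method is bound
# to a URL-style symbolic address.
methods = {
    "/mixer/1/level": lambda v: print("channel 1 level ->", v),
    "/mixer/2/level": lambda v: print("channel 2 level ->", v),
    "/mixer/2/mute":  lambda v: print("channel 2 mute  ->", v),
}

def dispatch(pattern, value):
    """Deliver one message to every address the pattern matches.

    OSC's pattern language resembles shell globbing, so fnmatch is a
    reasonable stand-in for this sketch.
    """
    for address, method in methods.items():
        if fnmatchcase(address, pattern):
            method(value)

# A single message addressed with a wildcard reaches both level methods.
dispatch("/mixer/*/level", 0.8)
```

This is the sense in which one OSC message can have multiple recipients: the sender names a pattern, and the server matches it against its whole address space.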

These features may be interesting in and of themselves, but how do they rate as attractions for the computer-based musician? To answer that question we need to reconsider MIDI, particularly its drawbacks.

As I implied earlier MIDI has been a mixed blessing for composers. It is certainly an empowering technology, but its capabilities have well-defined limits. Sometimes those limits are merely inconvenient, while at other times they can be fatal to a project. Some of MIDI's more frustrating limitations include:

  • Serial transport mechanism - Data must be ordered sequentially, which becomes problematic with heavy data streams.
  • Slow transmission rate - Again problematic under heavy loads.
  • Integer representation of pitch - Ignores other feasible representations.
  • Bias towards 12-tone equal temperament - See above.
  • Bias towards keyboard controllers - Difficult to map to wind instruments, guitars, and other controllers.
  • Integer representation of controller values - Results in insufficient granularity during controller movement.
  • Insufficient timing resolution - Again a limitation of integer representation.
  • Requires special hardware - Needed for external connections.

OSC's design addresses and resolves MIDI's most frustrating aspects, particularly its transport speed and its assumptions about use. No restriction is placed on pitch representation or any other musical representation, as long as the data format is supported by the protocol. Supported formats include integers, floats, doubles, and other types, a far more flexible scheme than MIDI permits. Transmission takes place at network speeds far beyond MIDI's bandwidth, and unwanted delays are mitigated by OSC's support for message bundles. The protocol makes no assumptions about its target devices, and no special hardware is required beyond a typical network interface.
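The difference in controller resolution is easy to see in code. The snippet below (my own illustration, not from either program) contrasts MIDI's 7-bit controller range with the 32-bit float argument OSC puts on the wire:

```python
import struct

# A MIDI continuous controller is one 7-bit integer: 128 discrete
# steps across the controller's whole travel.
midi_steps = 2 ** 7

# OSC can carry the same gesture as a 32-bit float argument, encoded
# big-endian on the wire, so a 0.0-1.0 sweep is effectively smooth.
osc_value = 0.503                      # a value MIDI simply cannot express
wire = struct.pack(">f", osc_value)    # 4-byte big-endian float32

decoded = struct.unpack(">f", wire)[0]
print(midi_steps, len(wire), round(decoded, 3))
```

A controller sweep sent as floats produces no audible "zipper" stepping, which is exactly the granularity problem listed above.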

One word more regarding MIDI. Lest it seem that I'm not a fan of the protocol, I must say that I love what MIDI has done for my own career as a musician. For its intended purposes MIDI is a remarkable achievement, and for certain kinds of composition MIDI remains my environment of choice. However, once the composer leaves the world of steady beats and equal temperament MIDI quickly reveals its limits. MIDI control of an audio synthesis environment such as Csound or ChucK works up to a point, but those systems permit far more flexible approaches to pitch and rhythm than MIDI can address. OSC provides a much better solution for controlling synthesis parameters in such environments.

Development Tools

Notable library implementations include liblo, Steve Harris's lightweight OSC library, and oscpack, Ross Bencina's library of C++ classes for handling OSC message streams. Other language-specific implementations include the Net::OpenSoundControl Perl module, SimpleOSC for Python, and JavaOSC.

The Disposable SoftSynth Interface (DSSI) audio processing plugin API employs OSC for communications between the plugin's internals and its user interface. According to the DSSI Web site, OSC "... ensures that the plugin's controls are consistently externally available, provides a guarantee of automatability, and encourages clean plugin structure."

By the way, if you know of other OSC development tools you think I should mention, please add them to the Comments at the end of this article.

Using OSC

Now that we know what OSC is and what it does, let's take a look at a practical example. Long-time readers may remember that I've been enamored with Jean-Pierre Lemoine's AVSynthesis, a fascinating program that unites the capabilities of Csound and OpenGL to create fantastic audio and video displays. AVSynthesis has supported MIDI for a while, and OSC support was added recently. Either system can be used to control almost every parameter of the program, but OSC provides the finer-grained resolution needed for smooth realtime results. Jean-Pierre has added an external OSC control panel to the AVSynthesis package (Figure 1), a very helpful addition that maps OSC messages to any parameter declared under MIDI control. Alas, that panel's sliders cannot be automated, so its utility is restricted to editing a single parameter at a time. A more interesting scenario would involve a programmable OSC messenger driving AVSynthesis parameters, and thanks to AlgoScore's new OSC support I can realize that scenario.

Figure 1: The OSC control panel for AVSynthesis

Figure 2 shows off AlgoScore configured to send five streams of OSC messages to AVSynthesis. Each stream is created by a line-segment envelope, a modulated sine wave, or an OSC event generator, and all those objects are connected to AlgoScore's osc_bus object. The OSC bus is defined to handle five message types, four of which are intended to control various audio/visual parameters. The remaining type controls the volume for the specified layer (i.e. a track in the AVSynthesis main display). When I start AVSynthesis in its OSC reception mode nothing happens in that program until I start AlgoScore, at which time all parameters in AVSynthesis under OSC control will be updated by the message streams from AlgoScore's osc_bus.

Figure 2: AlgoScore as an OSC message generator
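The control streams in Figure 2 can be sketched in Python. The envelope breakpoints and oscillation rate below are hypothetical stand-ins (AlgoScore's own objects are scripted in Nasal), but they show the kind of value streams the osc_bus sends at each tick:

```python
import math

RESOLUTION = 0.02            # seconds between messages, i.e. 50 Hz

def linseg(t, points):
    """Piecewise-linear envelope; points is [(time, value), ...]."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]     # hold the final value past the last breakpoint

def mod_sine(t, freq=0.25, depth=0.5, center=0.5):
    """Slow sine oscillation kept inside the 0-1 range OSC receivers expect."""
    return center + depth * math.sin(2 * math.pi * freq * t)

# One value per stream per tick over a 10-second score; each value
# would be packed as an 'f' argument and sent to its bus address.
envelope = [linseg(n * RESOLUTION, [(0, 0.0), (10, 1.0)]) for n in range(501)]
wobble = [mod_sine(n * RESOLUTION) for n in range(501)]
```

Each connected object contributes one such stream, and the bus multiplexes them all onto the network at the configured resolution.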

Let's consider the setup for both programs in some detail. To configure AlgoScore as an OSC message generator I followed these steps:

  1. Open a new project in AlgoScore.
  2. Define performance score length.
  3. Add an osc_bus object.
  4. Set osc_bus properties.
  5. Add linseg and other control objects.
  6. Set their properties.
  7. Create event generator with datagen object.
  8. Add Nasal code to create events.
  9. Connect objects to the OSC bus.

My previous article about AlgoScore provides further details about that program and its usage.

This scenario requires no external connection utility such as QJackCtl or Patchage. AlgoScore and AVSynthesis are configured to agree on the OSC host and port number (see below), and that's all that's needed to define their connection. More connections from other programs could be made, but I leave that exercise to the reader.

As the data generator AlgoScore must configure its OSC messages to the formats required by the receiver. AVSynthesis supports these two message types:

	/AVS/layerN
	/AVS/pN

where N is either the layer number or the number of a MIDI continuous controller assigned to a parameter or parameters in AVSynthesis. These messages are further defined by AVSynthesis to expect data as floating point numbers between 0 and 1. Their full declaration in AlgoScore's osc_bus Properties dialog (Figure 3) requires the data type, where the entire message follows this model:

	{foo:['/foo', 'f']}

The syntax for this message is label:['/address', 'data_format']. In this example 'f' indicates a floating point data type (the default), but the value may be any one of the types supported by AlgoScore's OSC implementation.
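Under the hood, each such declaration produces a standard OSC packet. The sketch below (my own, using only Python's standard library) builds one message for the '/foo' declaration above, following the OSC 1.0 encoding rules: null-terminated strings padded to 4-byte boundaries, a type tag string beginning with ',', then the big-endian arguments:

```python
import struct

def osc_string(s):
    """Encode an OSC string: ASCII, null-terminated, padded to a
    multiple of 4 bytes (a string always gets at least one null)."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, value):
    """Build one OSC message with a single float argument, matching
    a declaration like {foo:['/foo', 'f']}."""
    return (osc_string(address)          # URL-style address pattern
            + osc_string(",f")           # type tag string: one float
            + struct.pack(">f", value))  # big-endian float32 argument

packet = osc_message("/foo", 0.75)
print(len(packet))   # 16 bytes: 8 (address) + 4 (type tags) + 4 (float)
```

This is the work that liblo, oscpack, and the other libraries mentioned earlier perform for you.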

Figure 3: AlgoScore's osc_bus Properties dialog

I wanted AlgoScore's OSC bus to control the layer 1 volume in AVSynthesis and any parameters assigned to MIDI continuous controller #1, so I entered this declaration in the osc_bus object's Properties dialog:

	{layer1:['/AVS/layer1', 'f'], ctrl1:['/AVS/p1', 'f']}

When I connected an output object to the bus, AlgoScore asked whether I wanted the connection made for layer1 or ctrl1 (i.e. the message label). Depending on that choice, the data sent to the bus goes to the corresponding address in AVSynthesis.

My configuration file (data/config.xml) for AVSynthesis includes these settings for the OSC host machine's name and OSC port:

	OSCHost="localhost" OSCPort="7770"

These values are identical to the settings in AlgoScore's osc_bus Properties dialog.
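Once both programs agree on those two values, any OSC-capable sender can reach AVSynthesis. As a quick test of the connection, here is a minimal stand-in for the sender side, written against the host and port from config.xml (standard library only; the message layout follows the OSC 1.0 rules, and the two addresses are the ones declared above):

```python
import socket
import struct

# Must agree with data/config.xml: OSCHost="localhost" OSCPort="7770"
HOST, PORT = "localhost", 7770

def pad(b):
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

def message(address, value):
    # Address pattern + type tag string + one big-endian float32.
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set layer 1's volume to half, and any parameters bound to CC #1 to 0.25.
sock.sendto(message("/AVS/layer1", 0.5), (HOST, PORT))
sock.sendto(message("/AVS/p1", 0.25), (HOST, PORT))
sock.close()
```

Because the transport is plain UDP, the sender neither knows nor cares whether the receiver is AVSynthesis, AlgoScore, or something else entirely.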


Figure 4 displays the entire system with JACK, AlgoScore, and AVSynthesis operating in sweet harmony.

Figure 4: AlgoScore drives AVSynthesis

Performance Factors

I did encounter some performance issues. The OSC bus saturated when running multiple streams at the default resolution (0.02 seconds, i.e. 50 Hz), resulting in audio discontinuities in AVSynthesis. Performance was better at a resolution of 0.2 seconds, though some audio glitching remained. The next version of AlgoScore will provide improved timing capability, and I look forward to testing it in the combined environment described here. Such an environment demands considerable CPU power, and I confess that I have only begun to explore how to tune OSC's performance.
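A little arithmetic suggests where the bottleneck lies. Assuming a rough 20-byte packet per message (the size of the layer1 message above), the network load is trivial even at the faster resolution, which points at per-message processing in the receiver rather than bandwidth:

```python
# Back-of-the-envelope rates for five streams at the two resolutions tried.
streams = 5
packet_bytes = 20        # approximate size of one address + ',f' + float message

for resolution in (0.02, 0.2):
    rate = streams / resolution              # messages per second, all streams
    print(resolution, round(rate), round(rate * packet_bytes), "B/s")
```

Even 250 messages per second amounts to only a few kilobytes per second on the wire, so the glitching I heard almost certainly reflects scheduling and processing costs, not network capacity.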

Support In Linux Audio Software

Notable applications with OSC support include the Ardour audio workstation and Dave Griffiths' Fluxus audio/visual live-coding system. Support is particularly strong in audio processing environments, with implementations in ChucK, Csound, Pd, Squeak, and SuperCollider3.

Learning More

Primary sources for information about OSC include the OpenSound Control Web site and the CNMAT OSC page. The Wikipedia page on OSC provides a good summary of the project and a list of links to various OSC-aware programs, and a Google search for "Open Sound Control" yields the predictably enormous number of hits, relevant and otherwise.


For some musical purposes MIDI's capabilities are quite sufficient. However, other musical purposes are served not so well by MIDI, and its limitations can restrict expressive possibilities. OSC effectively removes those limitations. I'd like to see OSC implemented as widely as MIDI, so if you're an audio applications developer I hope you'll consider adding OSC support to your software.

Coming up: More new releases, updates, interviews, reports, profiles, reviews, and ramblings.
