AVSynthesis: Blending Light and Sound with OpenGL and Csound5

Introducing a unique and powerful program for mixing son et lumière into fascinating experimental videos.
The Composition Editor, Part 2

Before doing anything else, save your performance and all its parts with the Save Part/Performance button (Figure 2). Up to ten performances can be saved, each with ten parts, with up to 13 layers per part. For now, just save your work to its starting location (for example, Performance 0, Part 3).

Your track is now represented by its layer's blended image. Next, we need to add a performance curve in the track timeline. Left-click near the top of the track section to set a peak for the curve, near the bottom for a zero value. The envelope curve offers only fixed-length attack and decay segments, but you can click and drag to set arbitrary lengths for peak and zero-value segments (Figure 1). Okay: we've defined our visual and audio elements and their transformations, and we've set a performance curve in the composition timeline, so we're ready to put AVSynthesis into one of its performance modes.

The square buttons at the bottom right of the Composition screen represent the program's three performance modes. The right-most button turns on the rendering mode, the center square puts AVSynthesis into a MIDI-controlled mode, and the left button toggles the real-time performance mode.

The real-time mode plays the arrangement of layers and their associated curves on the composition screen timeline. Click the button, and your composition plays in real time. Click anywhere in the composition screen to stop playback. If an error occurs, AVSynthesis may print some relevant information to your terminal window, or it may run with no display or sound until you click to stop playback. Or, it may freak out entirely and freeze your system. As I said, it's experimental software, so these things happen.

When the MIDI performance mode is selected, the MIDI continuous controller #85 can be used as a layer fader during real-time performance from the composition screen. The input port is designated by the Csound options specified in the AVSynthesis config.xml file. In my example above, the -M0 option sets the input port to the ALSA MIDI Thru port.
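Csound's -M flag takes a device number, so it helps to know what MIDI ports your system actually exposes before editing the option string. The standard ALSA utilities will show you (a quick sketch; exact device and client numbers vary from system to system):

# list raw MIDI devices that Csound's -M option can address
amidi -l
# list sequencer input clients; the MIDI Thru port usually appears as client 14
aconnect -i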

I tested MIDI control by hooking a sequencer to the MIDI Thru port in QJackCtl's MIDI Connections panel. I used loops of sequential and random values for controller #85, and everything worked perfectly. The implementation is limited, but it points the way toward more interesting real-time performance controls, such as layer blackouts and sudden appearances. This MIDI control extends only to the video part of a layer; it does not affect the audio portion.
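If a hardware controller isn't handy, you can do the same wiring from the command line. A minimal sketch using the standard ALSA utilities (the client numbers here are examples only; check aconnect -l for yours):

# route a software sequencer's output (e.g., client 129) into the MIDI Thru port
aconnect 129:0 14:0
# watch the events arriving at the Thru port to confirm controller #85 gets through
aseqdump -p 14:0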

The rendering mode runs the arrangement on the Composition screen slower than real time, producing one TGA image file per video frame. The frame rate is set in the data/config.xml file (see above), and the author advises leaving it at its default of 30 frames per second. Thus, at the default frame rate, 30 image files will be created for each second of your composition. These files can be compiled into an animation (see below). At the same time, Csound's output is captured to a soundfile (render.wav in the data directory) that can be added to the animation.
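Because the frame rate is fixed, the file count maps directly to composition length, which makes for an easy sanity check after a render (the output lands in the data/render directory, as we'll see below):

# count rendered frames; at 30 fps, 1800 files means a 60-second composition
ls data/render/*.tga | wc -l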

For some reason, the render mode works only once per session. If you want to record another take, save your work and re-open the program. Hopefully, this limitation will be removed in a future version.

Incidentally, the Fullscreen, Save Perf/Part, Realtime Performance and MIDI Mode buttons are available from all screens within AVSynthesis.

Making a Movie

AVSynthesis does not create a movie directly. When you click on the Render button, the program creates a series of uniformly sized image files (approximately 4MB each), and the number of files can be massive. You will need a video encoding program to turn these static images into a flowing animation. The following instructions use MEncoder from the MPlayer Project, but any other video encoder should work, as long as it's capable of converting static TGA images into a movie.

The first step sorts the TGA files into a numbered list. This step is necessary if your encoder reads the TGA files in lexicographic order, like this: 1.tga, 10.tga, 100.tga, 1000.tga, 1001.tga...101.tga, 1010.tga, 1011.tga and so on.

Encoding the files in that order results in images rendered out of their original sequence. We need to encode them in this order: 1.tga, 2.tga, 3.tga, 4.tga and so on.
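The culprit is plain lexicographic sorting, which orders filenames as text rather than by numeric value. The difference is easy to demonstrate at the shell:

printf '1.tga\n10.tga\n2.tga\n' | sort      # text order: 1.tga, 10.tga, 2.tga
printf '1.tga\n10.tga\n2.tga\n' | sort -n   # numeric order: 1.tga, 2.tga, 10.tga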

I asked the mavens on the Linux Audio Users mailing list how they would resolve this irritating dilemma. Various solutions were proposed; the most appealing was this elegant fix from Wolfgang Woehl:

cd data/render
find *tga | sort -n > list

The list file can then be processed by MEncoder.
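To confirm that the list came out in proper numeric order, a quick peek at its head suffices (MEncoder itself doesn't require this step):

head -5 list    # should print 1.tga through 5.tga, one name per line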

As I mentioned, the Csound audio output is saved in a separate audio file named render.wav in the AVSynthesis data directory. By default, this file is a 16-bit stereo WAV file with a sampling rate of 44.1kHz—that is, a CD-quality soundfile. It needs no special attention unless you want to rename it.
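If you want to verify the format before encoding, the standard file utility will report it:

file data/render.wav
# typical output: RIFF (little-endian) data, WAVE audio,
# Microsoft PCM, 16 bit, stereo 44100 Hz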

Now, we're ready to encode our images and soundfiles. Given the potentially large number of TGA images, the encoder is likely to produce a very large video file, and even a relatively short animation can devour dozens of gigabytes of storage. We need to consider a compression scheme to reduce the file size.
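A little arithmetic shows why: at roughly 4MB per frame and 30 frames per second, the raw TGA output consumes about 120MB per second of animation, or more than 7GB per minute. It's worth checking how much space a render actually occupies before you encode it:

# total size of the rendered frames
du -sh data/render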

I discovered two ways of using MEncoder to create a compressed AVI from my audio and video data. The first way uses a multipass method:

mencoder -ovc lavc -lavcopts vcodec=huffyuv:pred=2:format=422P:vstrict=-1 \
  -noskip -mf fps=30 -o master.avi mf://@list
mencoder -ovc lavc -lavcopts vcodec=mpeg4:vme=1:keyint=25:vbitrate=1000:vpass=1 \
  -noskip -o foo.avi master.avi
mencoder -oac copy -audiofile ../render.wav -ovc lavc \
  -lavcopts vcodec=mpeg4:vme=1:keyint=25:vbitrate=1000:vpass=2 \
  -noskip -o foo.avi master.avi

The first step creates a huge master file, which is then treated to a two-pass reduction scheme that adds the audio data in the second pass.

This single-pass method also creates a large file, but it has the advantage of faster production:

mencoder -oac copy -audiofile ../render.wav -ovc lavc \
  -lavcopts vcodec=mpeg4:vme=1:keyint=30:vbitrate=1000 \
  -vf scale=800:600 -noskip -mf type=tga:fps=30 \
  -o avs-001.avi mf://@list

As presented, this method sets the movie display size to 800x600. The scale parameter also can be included in either the second or third step of the multipass example, and may in fact be necessary if your system complains about creating a large-sized movie.
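For example, adding the filter to the final pass of the multipass sequence would look something like this (all other options carried over unchanged from above):

mencoder -oac copy -audiofile ../render.wav -ovc lavc \
  -lavcopts vcodec=mpeg4:vme=1:keyint=25:vbitrate=1000:vpass=2 \
  -vf scale=800:600 -noskip -o foo.avi master.avi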

I've placed three example AVIs on-line at linux-sound.org/avs-examples. Each animation demonstrates some of the effects possible with a single GL shader (for example, wobble.avi), the simplest Csound audio setup (one synth, one signal processor), and the (mostly) default values for the sequencer. Alas, the compressed videos can only hint at the visual beauty of AVSynthesis performing in real time, and they are offered merely as glimpses of the program's artistic potential.


Comments


thanks

becks77: thanks dave, you saved me many hours of work with a clean doc on installation, usage, and rendering... i love avsynthesis

update 5/02

Dave Phillips:

AVS has gone through a series of updates since I wrote this article. Please see the AVS Web page for more information. Significant improvements include randomization controls, a version for JOGL (intended to replace the LWJGL dependencies), and better image-size support.

