Controlling Creatures with Linux
Motion and sound can be recorded and played back by a nonlinear multimedia editor called the Recorder, developed by Michael Babcock. The architecture, designed by Michael and Steve, consists of a multithreaded process networked to the Motion Engine via UDP. The Recorder is synchronized to the Motion Engine because it slaves off its output as an ROC client, yet the Recorder also streams stored data back to the Motion Engine for broadcasting to other ROC clients. This networked structure allows each process to have its own timing and I/O requirements, without interfering with the other, as in the Tool Server/Motion Engine relationship.
Because recorded motion can be cued and played back live, the puppeteer can layer a performance, as one would produce a multitrack audio recording. This is particularly useful for lip-sync scenarios, where the performance of a creature's mouth can be perfected off-line, then played back while the puppeteer performs the rest of the character live.
Dan Helfman contributed a sound recording facility to the SDL, the open-source multimedia API we use in the Recorder.
A module within the Motion Engine called the Link Supervisor can broadcast and manage connections with multiple ROC clients, regardless of their network type or implementation. The result is that one puppeteer can control multiple puppets in multiple mediums. For example, an animatronic cat can be performed at the same time as its computer graphics counterpart. While the body and face of the animatronic is captured on camera, a computer graphics mouth, performed simultaneously, can be viewed live on a monitor or even composited live with the camera tap image on set.
This allows each medium to do what it does best. We get the complex lighting and physics of a “real” creature on set, and CG mouth data can be further finessed in postproduction before compositing with the film plate. This live previsualization allows a director to truly direct the creature's performance on set, while allowing actors to interact with their creature costars.

There is a purposeful division between the Motion Engine, the Tool Server, the GUI and the Recorder. Because the more complex multimedia and networking modules require software techniques that might compromise process timing or stability, an architecture was designed by Steve Rosenbluth and Tim McGill that builds a wall around the Motion Engine. The goal was for the Motion Engine to have a minimal amount of complexity so that it keeps running. The Tool Server, expected to grow large and complex, was allowed to go down and restart without affecting the Motion Engine. The architecture also allowed the GUI to come and go without negatively affecting either the Tool Server or Motion Engine, and likewise for the Recorder. To accomplish this, the system was segmented into process modules that communicate via UNIX IPC and networking.
The Tool Server and Motion Engine have a block of System V shared memory in common. This enables immediate updates of critical data objects. They also communicate via two FIFOs for messaging that is sequence-critical. There are also UDP network sockets between the Motion Engine and Recorder, which stream data to each other in soft real time. The Motion Engine is what we call a near-mission-critical application, in that its failure in the field could have negative consequences for us. On-set downtime can cost a film production company many thousands of dollars an hour. It's also the nature of the motion picture industry that actors and crew may be in close contact with the animatronic machinery. It would be a bad thing to have an animatronic dog bite an actor while a technician logs in and restarts applications. That is why there is no GUI or other unnecessary code in the Motion Engine. Given our near-mission-critical requirements, the stability of the Linux operating system itself is a big plus.
The independent process architecture also aided development by allowing individual programmers to write and test more modular, self-contained pieces of code. It gave developers the freedom to safely use custom, and sometimes cutting-edge, programming techniques that weren't necessary or appropriate for the other process modules.