Linux Helps Bring Titanic to Life
Digital Domain is an advanced full-service production studio located in Venice, California. There, we generate visual effects for feature films and commercials as well as new media applications. Our feature film credits include Interview with the Vampire, True Lies, Apollo 13, Dante's Peak and The Fifth Element. Our commercial credits are challenging to count, much less list here (see the web site at http://www.d2.com/). While we are best known for the excellent technical quality of our work, we are also well respected for our creative contributions to our assignments.
The film Titanic (written and directed by James Cameron) opened in theaters December 19, 1997. Set on the Titanic during its first and final voyage across the Atlantic Ocean, this tale had to be recreated on the screen in all the splendor and drama of both the ship and the tragedy. Digital Domain was selected to produce a large number of extraordinarily challenging visual effects for this demanding film.
Digital visual effects are a large portion of our work. For many digital effects shots, original photographic images are first shot on film (using conventional cinematic methods) and then scanned into the computer. Each “cut” or “scene” is set up as a collection of directories with an “element” directory for all the photographic passes that contribute to the final scene. Each frame of film is stored as a separate file on a central file server.
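The per-shot layout described above can be sketched in a few lines. This is a hypothetical illustration, assuming a simple naming scheme; the shot name, element names, file extension and frame count here are invented, not Digital Domain's actual conventions.

```python
import os
import tempfile

def make_shot(root, shot, elements, frames):
    """Create an 'element' directory per photographic pass, with one
    placeholder file per film frame (each frame is a separate file)."""
    for elem in elements:
        elem_dir = os.path.join(root, shot, "element", elem)
        os.makedirs(elem_dir, exist_ok=True)
        for f in range(1, frames + 1):
            # Hypothetical naming: shot_element.framenumber.extension
            path = os.path.join(elem_dir, f"{shot}_{elem}.{f:04d}.cin")
            open(path, "w").close()

root = tempfile.mkdtemp()
make_shot(root, "ti_042", ["plate", "ocean", "smoke"], frames=3)
print(sorted(os.listdir(os.path.join(root, "ti_042", "element"))))
# → ['ocean', 'plate', 'smoke']
```

Keeping every frame as its own file on the central server is what lets each frame be processed independently later in the batch phase.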
A digital artist then begins working on the shot. The work may involve creating whole new elements such as animating and rendering 3D models or modifying existing elements such as painting out a wire or isolating the areas of interest in the original film.
This work is done at the artist's desktop (often on an SGI or NT workstation). Once the setup for this work is done, the process is repeated for each frame of the shot. This batch processing is done on all the available CPUs in the facility, often in parallel, and requires a distributed file system and a uniform view of the data. A goal of this processing is to remain platform independent whenever possible.
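Because each frame is an independent file, the batch phase is embarrassingly parallel. The sketch below, a stand-in only, shows the shape of the model: the same operation applied to every frame, spread across available workers. In the real facility the work spanned many machines over the network; here a thread pool on one host stands in, and `process_frame` is a placeholder for a render or paint-fix step.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame_number):
    # In production this would read the frame file from the central
    # server, run the renderer or filter, and write the result back.
    return frame_number, f"frame_{frame_number:04d} done"

frames = range(1, 9)
with ThreadPoolExecutor(max_workers=4) as pool:
    # Frames have no dependencies on one another, so they can be
    # farmed out to however many CPUs happen to be free.
    results = dict(pool.map(process_frame, frames))
print(len(results))  # → 8
```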
Finally, once all the elements are created, the final image is “composited”. During this step the individual elements are color corrected to match the original photography, spatially coordinated and layered to create the final image. Again, the setup for compositing work is usually done on a desktop SGI, and the batch processing is done throughout the facility.
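The compositing step described above can be illustrated with the standard straight-alpha “over” operator and a simple per-channel gain for color correction. This is a toy sketch of the math only: real compositors work on full film frames, and the 1-pixel “images”, element names and gain values here are invented for illustration.

```python
def color_correct(rgb, gain):
    """Per-channel gain: a stand-in for matching an element's color
    to the original photography."""
    return tuple(min(1.0, c * g) for c, g in zip(rgb, gain))

def over(fg_rgb, fg_alpha, bg_rgb):
    """Straight-alpha 'over': the foreground covers the background
    in proportion to its alpha matte."""
    return tuple(f * fg_alpha + b * (1.0 - fg_alpha)
                 for f, b in zip(fg_rgb, bg_rgb))

ocean = (0.1, 0.2, 0.4)                       # background element
smoke = color_correct((0.6, 0.6, 0.6),        # foreground element,
                      gain=(1.0, 0.95, 0.9))  # warmed slightly to match
result = over(smoke, 0.5, ocean)              # layer smoke over ocean
print(tuple(round(c, 3) for c in result))
```

In practice many such layers are stacked in sequence, which is why the setup is interactive but the per-frame evaluation can run as an ordinary batch job.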
Since building a full-scale model of the Titanic would have been prohibitively expensive, only a portion of the ship was built full size (by the production staff), and miniatures were used for the rest of the scenes. To this model we added other elements of the scene such as the ocean, people, birds, smoke and other details that make the model appear to be docked, sailing or sunk in the ocean. To this end, we built a 3D model and photographed 2D elements to simulate underwater, airborne and land-based photography.
During the work on Titanic, the facility had approximately 350 SGI CPUs, 200 DEC Alpha CPUs and 5 terabytes of disk, all connected by a 100Mbps or faster network.
Our objective is always to create the highest quality images within financial and schedule constraints. Image creation is accomplished in two phases. In the first phase, the digital artist works at an interactive workstation utilizing specific, sophisticated software packages and specific high-performance hardware. During the second phase, the work is processed in batch mode on as many CPUs as possible, regardless of vintage, location or features designed to enhance interactive performance.
It is difficult to improve on that first, interactive phase. The digital artists require certain packages that are not always available on other platforms. Even if similar packages are available, there is a significant cost associated with interoperating between them.
Another problem is that some of the packages require certain high-end (often 3D) hardware acceleration. That same quality and performance of 3D acceleration may not be available on other platforms.
In the batch-processing phase, improvements are more easily found, since basic requirements are high-bandwidth computation, access to large storage and a fast network. If the appropriate applications are available, we can improve that part of the process. Even in cases where only a subset of the applications are available on a particular platform, using that platform gives us the ability to partition work flow to improve access to resources in general.
We rapidly concluded the DEC Alpha-based systems served our batch-processing needs very well. They provide extremely high floating-point performance in commodity packaging. We were able to identify certain floating-point-intensive applications as port targets. The Alpha systems could be configured with large amounts of memory and fast networking at extremely attractive price points. Overall, the DEC Alpha had the best price/performance match for our needs.
The next question was which operating system to use. We had the usual choices: Windows NT, Digital UNIX and Linux. We knew which programs we needed to run on the systems, so we assembled systems of each type and proceeded to evaluate their suitability for the various tasks we needed to complete for this production.
Windows NT had several shortfalls. First, our standard applications, which normally run on SGI hardware, were not available under NT. Our software staff could port the tools, but that solution would be quite expensive. NT also had several other limitations; it didn't support an automounter, NFS or symbolic links, all of which are critical to our distributed storage architecture. There were third-party applications available to fill some of these holes, but they added to the cost and, in many cases, did not perform well in handling our general computing needs.
Digital UNIX performed very well and integrated nicely into our environment. The biggest limitations of Digital UNIX were cost and lack of flexibility. We would be purchasing and reconfiguring a large number of systems. Separately purchasing Digital UNIX for each system would have been time consuming and expensive. Digital UNIX also didn't have certain extensions we required and could not provide them in an acceptable time frame. For example, we needed to communicate with our NT-based file servers, connect two unusual varieties of tape drives and allow large numbers of users on a single system; none are supported by Digital UNIX.
Linux fulfilled the task very well. It handled every job we threw at it. During our testing phase, we used its ability to emulate Digital UNIX applications to benchmark standard applications and show that its performance would meet our needs. The flexibility of the existing devices and available source code gave Linux a definitive advantage.
The downside of Linux was the engineering effort required to support it. We knew that we would need to dedicate one engineer to support these systems during their setup. Fortunately, we had engineers with significant previous experience with Linux on Intel systems (the author and other members of the system-administration staff) and enough UNIX-system experience to make any required modifications. We carefully tested a variety of hardware to make sure all were completely compatible with Linux.