DreamWorks Animation "Shrek the Third": Linux Feeds an Ogre
“Global illumination is something lighters everywhere love”, says Pearce. “That bounces the light off all the surfaces.” The challenge is that global illumination, often implemented as ray tracing, can be expensive. “It's brilliant for architecture fly-throughs where the lighting is static and baked onto the building”, says Pearce. “Our problem is everything in Shrek 3 is moving.” The DreamWorks software “bakes” (calculates ahead of time) what's not moving and then tries to render the moving parts efficiently. “Global illumination is in almost every scene in Shrek 3”, says Pearce. “Scene complexity is what caps us now, such as forests. Ray tracing there is not reasonable.” Advances in Linux computing power, multicore chips and software are pushing render capability higher and higher.
“We use global illumination like a DP does”, says Art Department Production Designer Guillaume Aretos. “We use bounce cards and colored bounce cards. The moviemaking process for us is very close to live action, where we use the light that bounces. Shrek 2 was a south of Italy feel, kind of Beverly Hills turned into Italy.” Shrek 3 moves away from the eternal spring of the first two movies toward a more Northern European look. “It's very hard to get an overcast effect with bouncing backlight”, says Aretos, “and bright light shining through dark clouds.”
“I'm the head of layout”, says Nick Walker, “which makes me the DP.” Layout is the group that figures out where the virtual actors will stand in virtual sets. “Shrek is seven feet tall and all torso”, says Walker. “Shrek could swallow Fiona's head when moving in for a kiss. Puss in Boots and Donkey, the two sidekicks, are different sizes. It's difficult to get a two-shot.”
The production moves from story and concept artwork into 3-D modeling and eventually render. DreamWorks Animation uses the popular commercial Maya package, running on Linux, for 3-D modeling. Layout positions the characters in the scenes and determines overall lighting. Models are “rigged” with internal skeletons by the Character TDs, then given to the scene animators. Because of the complexity, Shrek 3 animators were assigned in pairs to each of the hundreds of scenes; in the past, it was one animator per scene. Lighting and any special effects, such as cloth or flames, are added. Then, the scene is rendered frame by frame on a 3,000+ CPU Linux renderfarm.
Each frame is assigned to a different node of the renderfarm by grid software (using Platform LSF, a commercial Linux package), so that many frames can be output simultaneously. The frames are edited into a movie using Avid software (not on Linux). Early in the process, hand-drawn storyboard images are scanned, and a scratch audio track is edited together, creating a rough video representation of the movie. As each sequence is completed, it replaces the rough storyboard footage, building the fully rendered movie scene by scene.
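Per-frame dispatch through a grid scheduler like LSF amounts to submitting one job per frame and letting the scheduler place each on a free node. Here is a minimal Python sketch of that idea; the queue name, job-name convention and `render` command line are hypothetical stand-ins, not DreamWorks' actual tooling, and real LSF submissions carry site-specific options:

```python
# Sketch: build one LSF bsub submission per frame of a shot.
# The "render" command, queue name and scene name are illustrative.

def frame_jobs(scene, first, last, queue="render"):
    """Return one bsub command string per frame in [first, last]."""
    jobs = []
    for frame in range(first, last + 1):
        jobs.append(
            f"bsub -q {queue} -J {scene}_f{frame:04d} "
            f"render --scene {scene} --frame {frame}"
        )
    return jobs

# Each command would be handed to the shell; LSF then assigns the job
# to an available node, so many frames render simultaneously.
for cmd in frame_jobs("swamp_shot_010", 1, 3):
    print(cmd)
```

Because frames are independent, this kind of dispatch parallelizes almost perfectly across the farm, which is why renderfarm throughput scales with node count until heat and power become the limit.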
“The fact that our main production pipeline is all on Linux is pretty interesting”, says head of Production Technology Darin Grant. “We've been on Linux for years, but I'm still amazed. Looking back, when using Linux was a radical concept for Digital Domain's renderfarm during Titanic, it wasn't that long until the industry moved toward wholehearted adoption.” The studio Digital Domain, where Grant formerly worked, built the first Linux renderfarm for Titanic (released in 1997). Grant now works at the PDI DreamWorks facility in Northern California and commutes to Glendale each week to ensure that his cross-site team is in sync. He also uses VSC, an immersive video teleconferencing system developed by DreamWorks Animation that HP has since taken to market as the Halo room. HAVEN, the 45Mbps DS3 Halo video exchange network, is used extensively to connect DreamWorks facilities with one another, with HP's desktop division and with AMD.
“The issues with maintaining a large Linux-based pipeline are the same as maintaining a large pipeline on any operating system”, says Grant. “We unified the studio on one standard pipeline a while ago, and now we have all productions at all times using the same pipeline. They stress, push and develop the pipeline in different ways on each production. Linux provides us with many advantages. Solid support for threading, NFS and LAMP toolsets are big pluses for us. Managing developers gets easier each year as the quality of the development tools and IDEs available on Linux improves.”
“At the desktop, we use HP xw9300 workstations running RHEL 4”, says head of Digital Operations Derek Chan. “The renderfarm uses HP DL145 G2 servers. Our standard for memory is to have 2GB per core. Servers have four cores, so that's 8GB.” Chan says DreamWorks Animation has a good relationship with Red Hat, working closely to ensure that HP workstations work with Red Hat Linux for DreamWorks.
“A challenge we overcame on Shrek 3 was the integration of metadata into our pipeline”, says Grant. “In a cross-team effort between R&D and Production Technology, we put in place a system that maintains historical version information, render statistics and other really valuable data in each and every file we produce. That we were able to do this shows one of the key advantages of having a proprietary toolset and file formats.” DreamWorks uses its own TIFF-like file format based on 16-bit binary fixed point, a limited High Dynamic Range (HDR) image format with a color range of 0 to 2.0. Letting images go whiter than white leaves headroom for image adjustments.
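The 0-to-2.0 fixed-point scheme can be sketched in a few lines. The exact scale factor DreamWorks uses is not public; the Python below simply assumes 0.0 maps to code 0 and 2.0 maps to 65535:

```python
# Minimal sketch of limited-HDR 16-bit fixed-point encoding with a
# 0-2.0 range. The mapping (0.0 -> 0, 2.0 -> 65535) is an assumption,
# not DreamWorks' published format.

MAX_CODE = 65535   # largest 16-bit code
RANGE = 2.0        # "whiter than white" headroom above display white

def encode(value):
    """Quantize a linear float value in [0, 2.0] to a 16-bit code."""
    value = min(max(value, 0.0), RANGE)   # clamp into representable range
    return round(value / RANGE * MAX_CODE)

def decode(code):
    """Recover the approximate float value from a 16-bit code."""
    return code / MAX_CODE * RANGE

# A highlight brighter than display white (1.0) survives the round
# trip, which is the headroom that later image adjustments rely on.
print(decode(encode(1.5)))
```

The trade-off against full floating-point HDR is precision spread uniformly over a capped range: values above 2.0 clip, but every 16-bit code is spent on the span artists actually use.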
DreamWorks Animation technology is organized into three core groups: R&D, which creates new technology; Production Technology, which oversees the production pipeline; and Digital Operations, which is responsible for the compute, network and storage infrastructure. The production pipeline has hundreds of small tools and applets that form the glueware enabling 200 people to work as an orchestrated team. Most legacy pipeline code is written in Perl, and most of the newer code is being written in Python. “We'd love to be all Python”, says CTO Ed Leonard, “but today we still have lots of Perl”.
“Our team has been spearheading the transition from Perl to Python at the facility”, says Grant. “There are three primary reasons for this. The creation of Python bindings to a C++ library is very easy and allows us to utilize core R&D libraries in the rest of the pipeline more quickly. The object-oriented nature of Python is very attractive given our new asset model and should allow us to make changes to that asset model much more easily in the future. And, Python is a first-class citizen in many of the third-party software applications that are used in our industry.”
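The appeal of an object-oriented asset model is that asset types can evolve by subclassing rather than by rewriting pipeline code. The sketch below is purely hypothetical; the class names, fields and paths are illustrative, not DreamWorks' actual asset model:

```python
# Hypothetical sketch of an object-oriented asset model in Python.
# Names and paths are illustrative only.

class Asset:
    def __init__(self, name):
        self.name = name
        self.versions = []            # historical version info

    def publish(self, path):
        """Record a new immutable version; return its 1-based number."""
        self.versions.append(path)
        return len(self.versions)

    def latest(self):
        return self.versions[-1]

class CharacterRig(Asset):
    """A new asset type is a subclass, not a pipeline rewrite."""
    def publish(self, path):
        # a real pipeline might validate the rig before publishing
        return super().publish(path)

rig = CharacterRig("ogre")
rig.publish("/shows/example/rigs/ogre_v001.rig")
rig.publish("/shows/example/rigs/ogre_v002.rig")
print(rig.latest())
```

Tools written against the `Asset` interface keep working when `CharacterRig` (or any future type) changes internally, which is the flexibility Grant describes for evolving the asset model.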
At the Linux Movies Conference, an all-day conference for motion picture technologists, last held in 2005 [that I chaired], the consensus of studio technologists was that the constraint on renderfarm size was heat. “The wall is still heat and power”, says Chan, “and a little bit of floor space”. DreamWorks Animation is into dual-core and about to go quad-core. “In the next eight months, we'll switch to eight computing cores per desktop”, says Chan. The studio is interested in accelerated computing with GP-GPU, which has the potential to move render processes from overnight to interactive. But there are daunting technical barriers: GPU programming is alien to most pipeline developers, and bandwidth between CPUs and GPUs on graphics cards is limited. “There are significant performance gains to be had”, says Chan, “especially if we could keep it all on die with an AMD-integrated GPU”.
Shrek 3 consumes 24TB of storage, out of an allocation of 30TB. DreamWorks likes to keep all its movies in near-line storage on big arrays of spinning disks. “People refer to previous movies all the time”, says Chan. “We have the three Shrek movies. Madagascar is getting a sequel.” Although everything is pretty much kept on-line, DreamWorks also archives to tape that is sent off-site for disaster recovery.