Linux and Scooby-Doo
Scooby-Doo, the computer-generated dog in the Warner Brothers film of the same name, was created using Linux. Scooby-Doo was released on June 14, 2002, and stars Sarah Michelle Gellar of the popular TV show Buffy the Vampire Slayer. Live footage for the film was shot in Australia, and the Scooby-Doo character was added digitally in post-production.
Animators at the Los Angeles post-production studio Rhythm & Hues used Maya, Houdini, Film GIMP and proprietary Linux-based tools. "We utilized about a hundred Linux desktops to create Scooby-Doo", says Technology VP Mark Brown. "My biggest problem was all the animators yelling at me for more Linux boxes."
Film GIMP is the motion picture version of the popular open-source GIMP image editing program. Scooby-Doo was in production at the time I visited the studio for my article, "Film GIMP at Rhythm & Hues", which appeared in the March issue of Linux Journal. Both a developer and a user of Film GIMP, Rhythm & Hues keeps a few Windows and Mac OS X machines around, mainly for compatibility with Adobe Photoshop.
After the article appeared, some readers asked why Photoshop is used rather than the GIMP. "Photoshop handles more layers with big images better", says Film GIMP developer Caroline Dahllöf, a programmer at Rhythm & Hues. Matte painting artists at Rhythm & Hues create large backgrounds with perhaps forty layers and use many specialized plugins. Working on single large images is quite different from the typical Film GIMP tasks of retouching film frames to remove dust or wire rigs. Eliminating Photoshop entirely would require investing substantial developer resources.
"I really wish that there would be an official effort and that I had more time to contribute", says Dahllöf. "Right now we're really busy, but I hope to have more time for Film GIMP this summer".
I myself am joining the project. The first things I want to accomplish are updating the Film GIMP web site and providing a source tarball so it isn't necessary to check out Film GIMP from CVS.
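Rolling a release tarball from a CVS working copy is mostly a matter of copying the tree and stripping out the CVS bookkeeping directories. A minimal sketch, with a hypothetical module name and version (a tiny working copy is fabricated here for illustration):

```shell
#!/bin/sh
# Hypothetical module name and version.
MODULE=filmgimp
VERSION=0.5

# For illustration, fabricate a tiny "working copy" containing the
# CVS bookkeeping directories a real checkout would have.
mkdir -p "$MODULE/app/CVS" "$MODULE/CVS"
echo 'int main(void){return 0;}' > "$MODULE/app/main.c"

DEST="$MODULE-$VERSION"
rm -rf "$DEST"
cp -R "$MODULE" "$DEST"
# Strip the CVS directories so only the sources ship in the tarball.
find "$DEST" -type d -name CVS -prune -exec rm -rf {} +
tar czf "$DEST.tar.gz" "$DEST"
```

A real release script would instead run `cvs export -r <tag>`, which checks out a tagged tree without the CVS directories in the first place.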
Film GIMP development perhaps has a renewed urgency because Apple recently acquired not only Nothing Real's Shake (see my May article in Linux Journal, "Tippett Studio and Nothing Real's Shake") but also Silicon Grail's RAYZ. Film GIMP, Shake and RAYZ are the only three Linux compositors available outside the studios; every other Linux-based compositor is a proprietary tool internal to the studio that developed it.
Steve Jobs reportedly visited motion picture studios some months back and took copious notes about how best to position Apple in the motion picture business. Buying Shake brings Apple the leading commercial film compositor, and in buying RAYZ, it has acquired the most significant Linux challenger. Apple stated that it intends to continue Linux support for at least one more version of Shake, but users worry that Apple seems lukewarm in its support for Linux. More Linux compositors, however, are on the horizon.
At the National Association of Broadcasters (NAB) convention in April, Discreet showed their Combustion product ported to Linux, though not yet released. Digital Domain says it may release NUKE, its proprietary compositor that has won two Scientific and Technical Achievement Academy Awards. ILM also has a highly regarded Linux compositor called CompTime (described in my July 2002 Linux Journal article, "Industrial Light & Magic"), but there are no plans to release it. A source at Adobe says there also are no plans to port After Effects to Linux, but they did release Adobe Acrobat for Linux in May without fanfare.
Rhythm & Hues has 125 Linux desktops and 300 SGI machines. Brown expects to complete the phasing out of SGI desktops by the middle of 2003. "Those doing the heaviest work are using Linux for performance", says Brown. "Productivity using Linux is through the ceiling. Interactively, Linux is five to six times faster than the SGI workstations being replaced".
"Our desktops are all dual-processor rackmounts, split 50-50 between P3s and Athlon MP 1800+", says Brown. Animator desktop machines are remote rackmounts kept in the machine room, connected with Cybex KVM extenders. Brown says that 3U racks were chosen to avoid any weird AGP risers. The graphics cards are ATI FireGL 2. "We're looking at FireGL 8800 Radeon cards", notes Brown, "but the drivers are not ready yet." They like the FireGL 2 cards because of the overlay planes (which work well with their software) and because they are good at manipulating heavy, complex 3-D geometries. Their machines use single monitors, not dual head.
The renderfarm, where the individual motion picture frames are computed, has 150 dual 1GHz Pentium machines and 60 dual Athlon MP 1800+ machines. "AMD chips scream for our applications", says Brown. "I can't tell you how impressed I am. An Athlon MP 1800+ gives about the same performance as a 2.2GHz Pentium Xeon but at a third of the price, if that." The render PCs all have separate IP addresses. Rhythm & Hues uses its own custom queue for batch control, which also uses the desktop machines as render nodes during their idle cycles.
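Rhythm & Hues' queue is proprietary, but the idle-cycle idea can be sketched: a desktop accepts a render job only when its load average is below some threshold. A minimal illustration, assuming a Linux /proc/loadavg:

```shell
# Sketch of the idle-cycle idea (the studio's actual queue is
# proprietary): accept a render job only when the one-minute
# load average is below a threshold.
is_idle() {
  threshold=${1:-1.0}
  load=$(cut -d' ' -f1 /proc/loadavg)   # Linux-specific
  # awk does the floating-point comparison POSIX sh can't.
  awk -v l="$load" -v t="$threshold" 'BEGIN { exit (l < t) ? 0 : 1 }'
}

if is_idle 1.0; then
  echo "node idle: would accept a render job"
else
  echo "node busy: job goes back to the queue"
fi
```

A real dispatcher would poll each desktop this way and hand jobs back to the farm queue the moment an animator's machine becomes busy again.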
"We've ported our software and have all that running on Linux", reports Brown. "We're using Red Hat 7.2 and XFree86 4.1. We use the ATI OpenGL libraries, the SGI GLU libraries and the Mesa 3.4.2 GLUT." Mesa recommends the SGI GLU library version 1.3 over its own 1.2 implementation because SGI's is more up-to-date and reliable. Brown created scripts to switch between various library permutations for testing and benchmarking.
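Such switching scripts typically work by pointing LD_LIBRARY_PATH at one of several install trees before launching the application. A minimal sketch; the directory layout and permutation names are hypothetical, not Rhythm & Hues' actual setup:

```shell
# gl_env: print the LD_LIBRARY_PATH for a named library permutation.
# The root directory and permutation names are hypothetical; each
# tree would hold one combination of GL/GLU/GLUT builds.
gl_env() {
  LIBROOT=/usr/local/gl-perms   # hypothetical install-tree root
  case "$1" in
    ati-sgi)   echo "$LIBROOT/ati-gl:$LIBROOT/sgi-glu" ;;
    mesa-only) echo "$LIBROOT/mesa-gl:$LIBROOT/mesa-glu" ;;
    *) echo "unknown permutation: $1" >&2; return 1 ;;
  esac
}

# Launch a benchmark under a chosen permutation, e.g.:
#   LD_LIBRARY_PATH="$(gl_env ati-sgi)" ./render_benchmark
```

The dynamic linker searches LD_LIBRARY_PATH before the system default paths, so each benchmark run picks up whichever GL/GLU builds the chosen tree contains.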
"Linux is stabilizing for us", says Brown. "For instance, normal operations are fine, but the Thunder K7 Tyan AGP 4x motherboard will wedge in our fire-hose tests". Brown says they probably will switch to ASUS or Gigabyte motherboards. BlueArc, Network Appliance and a custom Sun box are the backend NFS servers. "You just can't serve terabytes of data off a Linux box now", states Brown. "Throughput is about a third of what it should be".
Rhythm & Hues chose Angstrom for their rackmount PCs. "Angstrom does a good engineering job and has a good team", says Brown. "They did well with the burn-ins, and their prices are good. We get monster machines for $2,500. If I had the money, I'd throw out every SGI now and get Athlons".