The Day the Earth Stood Still
The Day the Earth Stood Still is a re-invention of the 1951 science-fiction film classic. Keanu Reeves stars as the benevolent visiting alien Klaatu, come to Earth to warn us to change our barbaric ways or face destruction.
Ten years ago, Titanic was the first film to use Linux in a big way. Today, Linux dominates big-budget visual effects and 3-D animation. Ever since The Matrix, it's become routine to have several visual-effects companies working on the same film. A visual effects supervisor at the studio, in this case Fox, selects which companies will create the visual effects.
“I came in and met with the director Scott Derrickson”, says The Day the Earth Stood Still Visual Effects Supervisor Jeffrey A. Okun. “In Scott's opinion, and one I agree with, the day of visual effect as star of the movie is gone. He wanted to focus on story. He wanted spectacular effects that were invisible. When dealing with spaceships, aliens and giant robots, that's a bit of a challenge.”
“Weta was our primary group; they did 220 shots on the film”, says Okun. “Then Cinesite. We had Flash Filmworks and CosFX. Later on we added Hammerhead and Hydraulx, a company called At the Post, and a couple other little companies. Weta handled the Sphere, the alien, the robot and the Swarm. It's all particle systems based on chaos theory. That means it's render-intensive.”
“There's a shot of the Sphere that we call the super-sphere shot”, says Okun. “That starts in the swamp and takes you to various Spheres activating around the world. That took 30 days to render. That's pretty crazy. It's around 1,100 frames. It's an amazing shot. You don't want to show it to the director at the end of the day and have him say, 'That's not really our sphere'...which is what happened. We came up with a patch system at Weta Digital where we could render a section and patch it over the offending thing. This particular patch took three days to render.”
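The patch approach Okun describes — re-rendering only the offending region and compositing it over the existing frames, rather than paying another 30-day full render — can be sketched in a few lines. The toy frames and function names below are purely illustrative, not Weta Digital's actual pipeline:

```python
# Hypothetical sketch of a render "patch": instead of re-rendering all
# 1,100 frames, re-render only the offending region (cheap) and
# composite it over the already-rendered frames. Frames here are toy
# 2-D grids of pixel values.

def apply_patch(frame, patch, x, y):
    """Return a copy of `frame` with `patch` composited at (x, y)."""
    fixed = [row[:] for row in frame]          # don't mutate the original render
    for dy, patch_row in enumerate(patch):
        for dx, value in enumerate(patch_row):
            fixed[y + dy][x + dx] = value
    return fixed

# Original render: a 4x6 frame, with two "offending" pixels.
frame = [[0] * 6 for _ in range(4)]
frame[1][2] = frame[1][3] = 9

# Re-render just that 1x2 region and patch it in.
patch = [[1, 1]]
fixed = apply_patch(frame, patch, x=2, y=1)

print(fixed[1])  # [0, 0, 1, 1, 0, 0]
```

In a real compositing pipeline the patch would carry an alpha matte and be blended per frame, but the economics are the same: three days of rendering a small region instead of thirty days for the whole shot.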
“Linux is an integral part of what we do here at Weta”, says Production Engineering Lead Peter Capelluto. “It's very well suited for the dynamic needs of the visual-effects industry. Our department would have a much more difficult time accomplishing our goals with any other operating system.”
“Weta predominantly uses Linux for our workstations and also for our renderfarm and servers”, says Capelluto. “There are a few applications that require the use of Mac OS X, Windows and Irix. Whenever possible, we use Linux. The open-source nature of Linux and the many Linux applications are a major advantage. We also prefer it for stability, low cost, access control, multiuser capabilities, control and flexibility.” Capelluto's department develops pipeline software, such as the digital asset management system and the distributed resource management system for their renderfarm.
“We have 500 IBM Blade Servers, 2,560 HP BL2x220C Blade Servers and 1,000 workstations”, says Weta Digital Systems Department Lead Adam Shand. “Ubuntu is our primary render and desktop distro. We also use CentOS, RHEL and Debian.” The workstations are IBM and HP. Weta uses NetApp Data ONTAP, NetApp GX, BlueArc, Panasas and SGI file servers. Storage is mostly NAS, not SAN. For open-source apps, they use Apache, Perl, Python, MySQL, PostgreSQL, Bind, OpenOffice.org, CUPS, OpenLDAP, Samba, Firefox, Thunderbird, Django, Cacti, Cricket, MRTG and Sun Grid Engine.
“We're big fans of open-source code here at Weta”, says Capelluto. “We're utilizing Sun's Grid Engine for distributed resource management and have helped them fix a number of bugs. It's very powerful to be able to improve upon open-source software and to fix any problems you encounter.”
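Grid Engine jobs are typically submitted with `qsub`, and a frame sequence like the 1,100-frame super-sphere shot maps naturally onto an array job, one task per frame. The wrapper script and renderer invocation below are hypothetical stand-ins, not Weta's actual pipeline:

```shell
#!/bin/sh
# render_frame.sh -- hypothetical per-frame render wrapper for Grid Engine.
# Grid Engine exports SGE_TASK_ID to each array task; we use it as the
# frame number to render.
#$ -N supersphere          # job name
#$ -cwd                    # run from the submission directory
frame=$SGE_TASK_ID
echo "rendering frame $frame"
# renderer --frame "$frame" --scene supersphere.scn   # placeholder renderer

# Submit all 1,100 frames as one array job:
#   qsub -t 1-1100 render_frame.sh
# Re-render only a patched frame range after a fix:
#   qsub -t 400-450 render_frame.sh
```

Because each task is an independent frame, the scheduler can spread the work across every idle blade in the renderfarm, and a fix needs only a narrow `-t` range resubmitted.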