Industry of Change: Linux Storms Hollywood
Before the summer of 2001, Linux supporters often pointed to single-company deployments as measures of success for the fledgling operating system. There was Burlington Coat Factory, which committed in February 1999 to deploy Linux in 250 US stores. That was followed by Japan's Lawson, which struck a deal with IBM to supply the convenience store retailer with 15,000 Linux-based IBM eServers running Red Hat software. Ford announced plans to deploy 33,000 Linux desktops. These were big wins for the open-source faithful, but they were corporate waves in a sea of change. What Linux needed was a tidal wave--an industry-wide migration--to signal that the penguin had come of age.
Enter the visual effects industry, the collection of studios that produce special effects, or VFX in industry parlance, for movies and animated tales like Toy Story and Shrek. This is an industry ripe for change, an industry struggling to shake the bondage of single-vendor solutions and high-priced specialized hardware. It's also an industry that tested the waters of Windows and found them flowing in the wrong direction.
This isn't a story about one or two studios adopting Linux as servers in their renderfarms, those back rooms full of servers that churn out the individual frames of a movie. We're talking about the entire industry--from Rhythm & Hues to Pixar, from Digital Domain to DreamWorks. DreamWorks-PDI had over 2,000 Linux-based CPUs on-line by the summer of 2001. Their summer blockbuster Shrek was rendered on 1,000+ mostly Linux machines (see GFX: "DreamWorks Feature Linux and Animation", August 2001 issue of LJ). Pixar has deployed only 15 stations in production and 25 in software development, but VP of Technology Darwyn Peachey says the studio is on the verge of a major purchase and deployment of desktops to replace their current SGI desktops. Even Industrial Light & Magic is considering a major switch to the penguin OS.
And this isn't just the infrastructure players saying they will support Linux--IBM or Compaq or HP announcing support for the OS--it's the end users demanding it from suppliers of applications and hardware. Back in June 2001, Ray Feeney, technology committee chair of the Visual Effects Society, said, "For the high-end part of movie making, 80-90% will be Linux-based inside of 18 months. Everything is going Linux." This sort of mass migration has never happened before in the Linux world. The tidal wave is here.
Understanding how this wave was formed requires some understanding of the industry itself. Effects studios talk about movie production as pipelines, the sets of processes required to create effects and integrate them into a movie. A pipeline has two distinct sides: the graphics workstations and the renderfarm. The latter is like any other room full of servers, crunching away on any given problem. In this case, the problem is producing the 3-D imagery from models fed to the farm by the many artists working for the studio. The artists work on the other end of the pipeline, on the graphics workstations.
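At its simplest, the renderfarm side of such a pipeline is just a scheduler carving a shot's frame range into batches and handing one batch to each node. The sketch below is a hypothetical illustration of that splitting step, not any studio's actual queueing system.

```python
# Minimal sketch of renderfarm work splitting: divide a shot's frame
# range as evenly as possible across N render nodes. Hypothetical
# example only; real farms use full-blown queueing systems.

def split_frames(first, last, nodes):
    """Return a list of (start, end) frame ranges, one per node."""
    total = last - first + 1
    base, extra = divmod(total, nodes)
    ranges, start = [], first
    for i in range(nodes):
        count = base + (1 if i < extra else 0)
        if count == 0:
            break  # more nodes than frames
        ranges.append((start, start + count - 1))
        start += count
    return ranges

# A 1,000-frame shot spread over 4 nodes:
print(split_frames(1, 1000, 4))  # [(1, 250), (251, 500), (501, 750), (751, 1000)]
```

Each node then renders only its assigned range, which is why adding machines to the farm scales throughput almost linearly.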
The first ripple in this tidal surge came with the use of Linux by Digital Domain to render frames for the movie Titanic. Involved in this film was well-known Linux graphics guru Daryll Strauss, who covered this story for Linux Journal back in February 1998. At the time, Daryll used a room full of Alpha-based Linux systems networked together to render some of the water scenes used in the movie. In this early stage, Linux still was used in its traditional role as a back-end server. The front-end graphics workstations were still primarily the domain of SGI IRIX systems.
In 1999, SideFX software ported their very popular (and very expensive) high-end 3-D modeling and animation package, Houdini, to Linux. Linux Journal again covered the story, this time in an interview I did with SideFX's Director of Research and Development, Paul Salvini. Houdini is an artist's tool used to create the models that renderfarms crunch on. At the time that Houdini was ported, Linux still had graphics-related limitations, such as a lack of support for hardware-accelerated OpenGL (a de facto industry standard for 3-D applications and games). This created a chicken-and-egg problem, according to Salvini: doing a product like this for Linux required hardware acceleration to make it really viable, but hardware acceleration requires applications in order to warrant drivers being written. Drivers from video card makers weren't being written because there were no applications that needed them, and applications weren't being written because no drivers were available. SideFX sidestepped the issue by using software-accelerated OpenGL, a slower and problematic alternative that didn't require special video card drivers. Still, it was enough to entice the VFX industry toward Linux. It also motivated graphics card vendors both to assist XFree86 and to begin work on their own proprietary drivers.
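The distinction Salvini describes is visible at run time: the OpenGL renderer string (returned by glGetString with GL_RENDERER, or reported by glxinfo) reveals whether a software rasterizer or a vendor's hardware driver is in use. Below is a rough sketch that classifies such a string; the sample strings and keyword list are illustrative heuristics, not an authoritative test.

```python
# Sketch: guess whether an OpenGL renderer string names a software
# rasterizer or a hardware-accelerated driver. The keyword list is a
# rough heuristic for illustration, not an exhaustive check.

SOFTWARE_HINTS = ("indirect", "llvmpipe", "softpipe", "software")

def is_software_gl(renderer):
    """True if the renderer string looks like a software rasterizer."""
    r = renderer.lower()
    return any(hint in r for hint in SOFTWARE_HINTS)

# In practice the string would come from glGetString(GL_RENDERER)
# or from parsing `glxinfo` output.
print(is_software_gl("Mesa GLX Indirect"))        # True
print(is_software_gl("NVIDIA GeForce2 GTS/AGP"))  # False
```

For an artist pushing complex Houdini scenes, landing on the software path meant every polygon was rasterized on the CPU, which is why SideFX's workaround was usable but slow.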