Linux Powers Four-Wall 3-D Display
The cost per performance of PC clusters is making them a viable alternative to traditional high-end visualization supercomputers. Rapid evolution in commodity PC hardware has both driven down costs and shortened obsolescence cycles. A general rule to follow when buying graphics capability from SGI is to budget $250,000 US per graphics pipe. For those who cannot afford to outfit an Onyx-class computer with four graphics pipes, extra raster managers may be added to two pipes to drive a four-wall display system. In contrast, our experimental cluster costs less than $1,000 US per node. With the addition of a video matrix switcher, the grand total was less than $15,000 US. Even with the fast obsolescence cycles, the price difference is so great that an organization could afford to replace or upgrade the graphics cluster many times during the lifetime of one SGI system. Another advantage of PCs is the wide availability of low-cost parts. Overall, the PCs are cost-effective, powerful and flexible.
We present an experiment that integrates a commodity cluster into an existing four-wall display system—a Surround-Screen Visualization System (SSVR) from Mechdyne Corporation. The objective is to attain active stereo visualization on multiple walls using genlocking, swap-locking and data-locking capabilities.
High-end visualization supercomputers offer multiwall, active stereo visualization as a packaged whole: stereo presentation and coordination of scene graph data are handled automatically, either in hardware or through proprietary software libraries. Our cluster was designed from the beginning to replace the aging SGI equipment that drives our current four-wall display system. We find ourselves taxing the capabilities of an Onyx2 system with InfiniteReality2 graphics by demanding ever-larger numbers of polygons while needing a fixed frame rate for active stereo. When implementing a cluster, these synchronization tasks must be handled explicitly in order to produce a coherent scene across many screens.
Communication between the cluster nodes is vital. Data such as pixels, geometric primitives or even scene graph data is passed among the nodes. The way data is handled and the type of data passed greatly impacts the network bandwidth requirements of the cluster. Two basic approaches for setting up a graphics clustering communication software architecture are client/server and master/slave.
In the client/server approach, a single node serves data to the graphics rendering clients. The advantage of this arrangement is that many different applications can embed a server that works with the same set of rendering client nodes, making the environment very flexible. The disadvantage is higher network bandwidth consumption; most client/server clusters rely on relatively expensive Myrinet or Gigabit Ethernet hardware.
The master/slave approach, used in this project, consists of multiple nodes, each of which locally stores and runs an identical copy of the graphics application. Consequently, only a small amount of information, such as input-device data and timestamps, needs to be shared among the nodes, and network bandwidth becomes less of a concern. In this configuration, the master node handles application state changes.
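As a rough illustration (not the project's actual protocol), the per-frame traffic in a master/slave cluster can be as small as one datagram: the master serializes the input-device state and a timestamp, and every slave applies the same state before rendering its local copy of the scene. The port number and field names below are assumptions, not taken from the article:

```python
import json
import socket

SYNC_PORT = 5000  # assumed port, not from the article

def pack_frame_state(frame, timestamp, head_pos, buttons):
    """Serialize the small per-frame state the master shares with its slaves."""
    return json.dumps({"frame": frame, "t": timestamp,
                       "head": head_pos, "buttons": buttons}).encode()

def unpack_frame_state(payload):
    """Recover the state dictionary on a slave node."""
    return json.loads(payload.decode())

def broadcast_state(sock, slaves, payload):
    """Send the identical per-frame state to every slave over UDP."""
    for host in slaves:
        sock.sendto(payload, (host, SYNC_PORT))
```

Because each slave already holds a full copy of the application and its data, this handful of bytes per frame is essentially all the network traffic the master/slave design requires.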
All graphics clusters must satisfy the following three requirements:
Genlocking: the process of synchronizing the video frames from each node in a cluster so that they produce a fluid, coherent image. Genlocking may be achieved through software or hardware.
Swap Locking: the process of synchronizing frame buffer rendering and swapping. This is necessary because each view of a scene contains different amounts of data and different numbers of polygons to render, which may produce different rendering times for each frame on each node.
Data Locking: the process of synchronizing the views to maintain consistency across the screens. This becomes an issue since each node in the cluster renders its frames from locally stored information.
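A software swap lock amounts to a network barrier placed between rendering and buffer swapping. The article does not spell out its protocol, so the following sketch is illustrative only: each node reports "ready" to the master once its frame is drawn and blocks until every node has finished; only then does each node swap its buffers.

```python
import socket
import time

def swap_lock_master(port, n_nodes):
    """Collect a 'ready' from each render node, then release them all at once."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(n_nodes)
        conns = []
        for _ in range(n_nodes):
            conn, _ = srv.accept()
            conn.recv(16)              # block until this node says b"ready"
            conns.append(conn)
        for conn in conns:             # every frame is rendered: release everyone
            conn.sendall(b"swap")
            conn.close()

def swap_lock_node(port, retries=100):
    """Called after rendering a frame, immediately before swapping buffers."""
    for _ in range(retries):           # retry until the master is listening
        try:
            s = socket.create_connection(("127.0.0.1", port))
            break
        except OSError:
            time.sleep(0.05)
    with s:
        s.sendall(b"ready")
        return s.recv(16)              # blocks until the master says b"swap"
```

In a real cluster the call to `swap_lock_node` would sit between the end of rendering and the buffer-swap call, so the slowest view of the frame paces all of the others.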
We used a set of standard PC configurations equipped with MSI G4Ti4600 graphics adapters powered by NVIDIA's GeForce4 Ti graphics processing unit (GPU) and 128MB of DDR video memory. Although not strictly necessary, the PCs were identical, which made software installation easier. The PCs communicated via 100BaseT networking adapters and a 100BaseT switch.
The projectors of the SSVR are connected to an Extron CrossPoint Plus 124 matrix video switcher, which accepts input from 12 video sources and routes output to any of four destinations.
Since genlocking and data locking are handled in software through the parallel ports, a special box (Figure 2) was fabricated to handle the signaling appropriately. This box was also built from commercial off-the-shelf hardware for less than $20 US. Besides controlling the switcher, this box also outputs a genlocking signal to a set of Crystal Eyes infrared emitters.
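The article does not document the box's pin assignments, so the bit layout in the sketch below is purely hypothetical; only the mechanism is standard Linux, where a root process can reach the conventional LPT1 data register at I/O address 0x378 by seeking in /dev/port. Driving such a box might look roughly like:

```python
import os

LPT1_DATA = 0x378   # conventional I/O address of the first parallel port

GENLOCK_BIT = 0x01  # hypothetical: left/right-eye genlock signal to the emitters
SWAP_BIT = 0x02     # hypothetical: strobe line toward the video switcher

def sync_byte(left_eye, swap_ready):
    """Compose the data-register byte for one video field."""
    value = 0
    if left_eye:
        value |= GENLOCK_BIT
    if swap_ready:
        value |= SWAP_BIT
    return value

def write_lpt(value, port_dev="/dev/port"):
    """Write one byte to the parallel-port data register (requires root)."""
    fd = os.open(port_dev, os.O_WRONLY)
    try:
        os.lseek(fd, LPT1_DATA, os.SEEK_SET)
        os.write(fd, bytes([value]))
    finally:
        os.close(fd)
```

Toggling the genlock bit once per video field is what keeps the Crystal Eyes emitters, and hence the shutter glasses, in step with the projected left- and right-eye images.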
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
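The find-plus-grep combination described above is a shell one-liner, roughly `find /home -name "*.log" -exec grep -l pattern {} +`. The same composition rendered in Python for illustration, with the root directory and pattern as parameters so nothing here is specific to /home, might look like:

```python
from pathlib import Path

def logs_containing(root, pattern):
    """Return every *.log file under root whose contents include pattern."""
    hits = []
    for logfile in sorted(Path(root).rglob("*.log")):   # the 'find' half
        if pattern in logfile.read_text(errors="replace"):  # the 'grep' half
            hits.append(logfile)
    return hits
```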
Cron traditionally has been considered another such tool, one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
- Google's SwiftShader Released
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide