The Best Game in Town
In October 2006, Terra Soft announced its plan to build the world's first supercomputing cluster using the Sony PlayStation 3 (PS3), which utilizes the IBM Cell Broadband Engine and the Linux operating system. The idea emerged when Sony Computer Entertainment came knocking on Terra Soft's door, interested in showing that the PS3 is more than merely a game box. With a 3,000-sq-ft supercomputing facility built at Terra Soft's headquarters and a heavy dose of good old-fashioned tinkering, the cluster is well underway. Terra Soft's CEO Kai Staats called the building of the PS3 cluster a “highlight of [his] time in this industry”. We caught up with Kai recently for an insider's view on the PS3 cluster.
LJ: Thank you for agreeing to talk with us, Kai. Tell us, why did Sony come to Terra Soft to build this cluster?
KS: Terra Soft has, for eight years, dedicated itself to the Power architecture, providing a leading Linux OS for systems built upon the IBM and Freescale CPUs, such as Apple's PowerPC product line. This experience and expertise gave Sony the confidence that Terra Soft would provide a high-quality end-user experience with professional support.
LJ: The PS3 cluster you have created together with Sony is an interesting application of what is marketed primarily as a home-entertainment machine. The PS3 is really a flexible, powerful machine, isn't it?
KS: Yes, the PS3 is both. I believe we are experiencing an interesting paradigm shift, from three decades of personal computers competing with dedicated game boxes to the industry's first game box offering true personal computer functionality.
Sony recognizes that, with its Cell Processor, the PS3 is not just another image processing engine, but a full-featured, fully capable home computer and lightweight development workstation. This is a tremendous market differentiator.
At home, the PS3 elegantly consolidates the CD, DVD, MP3 player and home computer into a single “appliance”. In supercomputing, the PS3 offers an inexpensive, lightweight compute node. Not designed to compete with the Mercury and IBM Cell blades, the PS3 enables individuals and labs to develop and optimize code for this new nine-core architecture within a limited budget. The same code seamlessly migrates to the high-performance Cell products.
LJ: We're curious to know more about the significance of the Cell Processor.
KS: The PS3 is built upon the Cell Broadband Engine, a nine-core CPU designed by Sony, Toshiba and IBM (the STI consortium). It provides exceptional front-side bus performance.
LJ: Does the Cell's 1+8 multicore processing environment behave like a true eight-core processor, or is there a significant difference?
KS: The first core is the PPU, an IBM PowerPC 970-compatible unit. This means any Linux application designed to function on the Apple G5 or IBM JS21 (for example) will operate seamlessly on this core. The additional eight SPEs (Synergistic Processing Elements) provide eight more cores that may be addressed as CPUs (as compared to DSPs), enabling a unique and powerful single-chip processing environment. By keeping the code on the front side (as compared to dropping down to the north bridge, as with historic multiple-CPU configurations), performance is maximized.
LJ: A critical part of realizing the PS3 cluster appears to be Y-HPC, your cluster-construction suite. What are Terra Soft's innovations there?
KS: Simply stated, the Y-HPC cluster-construction suite delivers node images to compute nodes. But, around this core function is the means to manage multiple, unique node images and node “personalities” that modify any given node image to perform various designated tasks. Nodes may be deployed as an NFS server, storage server or compute node (for example), based upon the personality configuration.
Y-HPC integrates a full command-line syntax as well as a graphical user interface. And, Y-HPC may be deployed, both server and nodes, on x86, x86-64 and Power (G3, G4, G5, IBM JS20/21, p5 and Cell). Furthermore, Y-HPC is the first and only cluster-construction suite for IBM, Mercury and Sony Cell.
Although Y-HPC does incorporate some basic cluster-node monitoring tools, it is not designed to replace Cluster Resources' Moab. Instead, it is designed to integrate with Torque and Moab for a complete build-to-run solution.
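As a purely illustrative example of the run side of that build-to-run pipeline, a minimal Torque job script might look like the following. The job name, node count and script path here are all hypothetical, and a real site would adjust them to its own queues and resources:

```
#!/bin/sh
# Hypothetical Torque/PBS job script: request four nodes and
# run a (made-up) analysis script from the submission directory.
#PBS -N gene-compare
#PBS -l nodes=4
cd "$PBS_O_WORKDIR"
./run_analysis.sh
```

Submitted with qsub, this is the sort of job Moab would then schedule across the cluster's compute nodes.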
Currently in a beta v2.0 release, Y-HPC is being shipped to key customers in addition to its deployment on our internal PS3 cluster.
LJ: How does Yellow Dog Linux play into other Cell-based systems?
KS: In the fall of 2005, Mercury Computer engaged Terra Soft to develop and maintain a commercial Linux OS for its Cell-based systems. This was first announced at SC2005, Seattle (www.terrasoftsolutions.com/news/2005/2005-11-15.shtml).
Mercury began shipping IBM BladeCenter form-factor Cell blades in January 2006 with Yellow Dog Linux pre-installed. Terra Soft continues to maintain and develop Yellow Dog Linux for Mercury's Cell-based products, with forthcoming support for the PCI form-factor CAB and 1U-rackmount form-factor “pizza box” node (mc.com/products/boards.cfm).
LJ: You'll be using this cluster for bioinformatics. Can you explain for our readers what bioinformatics is? Why does this particular cluster lend itself to this application?
KS: Wikipedia explains, “The terms bioinformatics and computational biology are often used interchangeably. However, bioinformatics more properly refers to the creation and advancement of algorithms, computational and statistical techniques, and theory to solve formal and practical problems posed by or inspired from the management and analysis of biological data.”
I would add, at a more basic level, that bioinformatics includes the comparison of gene sequences between two or more organisms. For instance, as a bacterium or virus mutates, one or more of its genes differs from those of the previous strain. Once the genes are sequenced (the process by which they are identified, labeled and placed into a database), bioinformatics offers a means by which the pre-mutation and post-mutation sequences can be compared and better understood.
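At its simplest, that kind of comparison amounts to counting the positions at which two sequences differ. Here is a toy sketch in bash; the sequences are made up, and real tools such as BLAST do far more sophisticated alignment than this:

```shell
# Count mismatched positions between two short, made-up gene sequences
seq_a="ACGTACGT"
seq_b="ACGTTCGT"
diff_count=0
i=0
while [ $i -lt ${#seq_a} ]; do
    # Compare the sequences one character (base) at a time
    [ "${seq_a:$i:1}" != "${seq_b:$i:1}" ] && diff_count=$((diff_count + 1))
    i=$((i + 1))
done
echo "$diff_count"   # these two sequences differ at one position
```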
Y-Bio and the cluster offer a means by which thousands of gene sequences may be compared on a daily basis, in addition to other applications that will be introduced by the Department of Energy and “dot-edu” researchers.
LJ: Originally, Sony contracted with you to build two clusters based on the PS3 platform: a test cluster playfully dubbed E.coli and a production cluster called Amoeba. Is this what finally transpired?
KS: The original plan was to build both a test and a production cluster from “beta” PS3 units, the 2U-rackmounts that Sony provided to game developers prior to shipping hardware. Last fall, Sony determined it would prefer to use shipping PlayStation 3 units, the same as those found in retail stores worldwide.
This renewed effort was put in motion in January with the first 20 of a slated 128 nodes having arrived a little more than two weeks ago.
The first 20 nodes are on the rack shelves now. Two weekends ago, we created a reduced-footprint Yellow Dog Linux node image of roughly 680MB, just the essentials for a functional, flexible compute node. We are now updating our Y-HPC cluster-construction suite, with the folks from Cluster Resources applying Torque Resource Manager and the Moab Cluster Suite.
LJ: What has your experience been with the cluster thus far?
KS: The only true challenge was in working with a new bootloader (kboot) and the associated ramdisk image for netbooting. It took an afternoon or two of tweaking and breaking things before we found the magic combination of settings, and then the PS3s were up and running as NFS-booting cluster nodes.
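For the curious, NFS-root netbooting on Linux generally comes down to a handful of kernel parameters. The kboot.conf entry below is only a sketch; the server address, export path and label are invented, and the exact settings Terra Soft landed on aren't given in the interview:

```
# Sketch of a kboot.conf entry for an NFS-root compute node
# (addresses and paths are invented for illustration)
default=ydl
ydl='/vmlinux root=/dev/nfs nfsroot=192.168.0.1:/export/ydl-node ip=dhcp'
```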
LJ: Can you give us an idea of what kind of performance improvements you achieve with the nine-core Cell Processor vs. other CPUs?
KS: During our Hack-a-Thon, we witnessed some interesting advances in code performance on the Cell Processor. In particular, the Mesa library experienced an 80x increase over the published performance on an Intel Woodcrest. Some folks from IBM have been working on BLAST for Cell with a noted 10–20x performance improvement.
LJ: Did you have to do anything special to the Linux kernel to exploit the Cell Processor's multicore architecture and support chips?
KS: We do not need to modify the kernel; the special interfaces for accessing the Cell's SPEs have been included in the Linux kernel for some time now (I can't remember when Cell support was first added). We also include the Cell SDK, which allows you to build, run and debug applications that utilize the Cell's SPEs. [This question was answered with support from Owen Stampflee, Lead Developer of Yellow Dog Linux.]
LJ: Are there any disadvantages to using the PS3 vs. other hardware?
KS: The PS3 has just under 256MB of user-space RAM, whereas the IBM and Mercury Cell blades currently offer 512MB per CPU, for a combined, shared 1GB of RAM on a dual-Cell (18-core) board. That is also far less RAM than we have come to expect of modern 32-bit desktops and 64-bit workstations.
This limited RAM is constraining and, of course, things are not quite as snappy as they would be with more. But the 3.2GHz CPU and fast front-side bus compensate well, and the desktop is quite usable. Even large-footprint apps, such as OpenOffice.org, are functional. MythTV is impressive. But with a very large image, The GIMP certainly would take a hit.
When optimizing code for the Cell SPEs, independent of the PS3 or IBM implementation, it is imperative that the algorithms themselves be reworked to enable pipelining: the continuous, steady streaming of code and data through each SPE's relatively small local store. A stall in this pipeline, and performance is lost.
This may be an afternoon of rework, or a few weeks or more, depending on the complexity of the code. But when successful, the end result is phenomenal, with 32-bit floats taking advantage of both the SIMD unit onboard each SPE and the eight-way multicore spread.
LJ: It is impressive to see that you are co-developing and open-sourcing a range of life-sciences applications in tandem with universities and national labs, such as Lawrence Berkeley, Los Alamos and Oak Ridge. What progress have you made here?
KS: Everything we house in our on-campus, 3,000-sq-ft server room is given to the members of our HPC Consortium (www.hpc-consortium.net) free of charge. This currently includes seven IBM Cell blades (used for Cell code development and optimization) and the growing PS3 cluster interconnected via GbE to two G5s, a devel box and head node. Access is granted via a dedicated port on our fibre drop, which currently sits at 10Mb and scales in minutes to as high as 100Mb, if needed.
We are expecting to receive additional Cell-based pSeries systems, and perhaps some GPU systems, in the not-so-distant future.
All Consortium technical members (those whose proposals for Cell development have been accepted) are granted an account on all in-house systems.
The Consortium is an experiment in leveling the playing field, bringing together developers from all programming backgrounds and engaging them on the same hardware, in the same lists—advanced programmers from IBM working with newbies from universities, DOE lab and commercial employees collaborating during the Hack-a-Thon and then assisting each other with ongoing development.
LJ: Are there other applications besides bioinformatics that you will be targeting?
KS: Absolutely. At the Hack-a-Thon, there were projects to optimize the kernel, build a Cell development toolset for Windows, optimize Mesa (mentioned earlier), work on visualization libraries and more. There are new projects in motion to bring multimedia apps, CFD libraries and film rendering to Cell. The potential is limited only by the determination of the individuals and teams that work with Cell, and the Consortium by no means limits the effort to a single field of study.
LJ: Thank you for sharing information about this innovative project, Kai. Good luck to you!
KS: Thank you, too!
James Gray is Linux Journal Products Editor and a graduate student in environmental science and management at Michigan State University. A Linux enthusiast since Slack 1.0 in 1993, he currently lives in Lansing, Michigan, with his wife and kitty.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
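The find-plus-grep combination described above fits in a single command. The sandbox directory, file contents and search string below are made up purely for illustration:

```shell
# Set up a tiny sandbox so the pipeline has something to search
# (all paths and contents here are illustrative)
mkdir -p /tmp/demo/home/user
echo "ERROR: disk full" > /tmp/demo/home/user/app.log
echo "all quiet"        > /tmp/demo/home/user/other.log
echo "not a log"        > /tmp/demo/home/user/readme.txt

# Find every .log file under the directory and list those containing
# a particular entry -- the erector-set pattern in action
find /tmp/demo/home -name '*.log' -exec grep -l 'ERROR' {} +
```

This prints only /tmp/demo/home/user/app.log, the one .log file containing the entry.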
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
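For comparison, the traditional cron approach amounts to a one-line crontab entry like the one below (the schedule and script path are hypothetical); the webinar's question is what to do when your needs outgrow this model:

```
# Hypothetical crontab entry: run a report job every weekday at 2:30 AM
# minute hour day-of-month month day-of-week command
30 2 * * 1-5 /usr/local/bin/nightly_report.sh
```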
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide!