PathScale InfiniPath Interconnect
Listing 1. b_eff output
The effective bandwidth is b_eff = 5149.154 MByte/s on 16 processes
( = 321.822 MByte/s * 16 processes)
Ping-pong latency:     1.352 microsec
Ping-pong bandwidth: 923.862 MByte/s at Lmax= 1.000 MByte
(MByte/s=1e6 Byte/s)   (MByte=2**20 Byte)

system parameters  : 16 nodes, 128 MB/node
system name        : Linux
hostname           : cbc-01
OS release         : 2.6.12-1.1380_FC3smp
OS version         : #1 SMP Wed Oct 19 21:05:57 EDT 2005
machine            : x86_64
Date of measurement: Thu Jan 12 14:20:52 2006
Most vendors do not publish their message rate, instead putting out their peak bandwidth and latency. But bandwidth varies with the size of the message, and peak bandwidth is achieved only at message sizes much larger than most applications generate. For most clustered applications, the actual throughput of the interconnect is a fraction of peak, because few clustered applications pass large messages back and forth between nodes. Rather, applications running on clusters pass a large number of very small (8–1,024 byte) messages back and forth as nodes begin and finish processing their small pieces of the overall task.
This means that for most applications, the number of simultaneous messages that can be passed between nodes, or message rate, will tend to limit the performance of the cluster more than the peak bandwidth of the interconnect.
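The penalty that per-message overhead imposes on small messages can be sketched with a simple latency/bandwidth cost model, plugging in the ping-pong numbers from Listing 1 (1.352 microseconds of latency, 923.862 MByte/s of peak bandwidth). This model is an illustration of the principle, not PathScale's measurement methodology:

```python
# Simple latency/bandwidth cost model: time per message = latency + size / peak.
# The two constants come from the b_eff output in Listing 1.

LATENCY = 1.352e-6    # ping-pong latency, seconds
PEAK_BW = 923.862e6   # peak ping-pong bandwidth, bytes/second

def effective_bandwidth(msg_bytes):
    """Achieved bytes/second for one message of the given size under the model."""
    return msg_bytes / (LATENCY + msg_bytes / PEAK_BW)

for size in (8, 1024, 65536, 1048576):
    eff = effective_bandwidth(size)
    print(f"{size:>8} B: {eff / 1e6:8.1f} MB/s ({100 * eff / PEAK_BW:5.1f}% of peak)")
```

Under this model, a 1KB message achieves less than half the peak bandwidth, and an 8-byte message only a fraction of a percent of it, which is why message rate rather than peak bandwidth governs small-message workloads.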
As end users attempt to solve more granular problems with bigger clusters, the average message size goes down and the overall number of messages goes up. According to PathScale's testing with the WRF modeling application, the average number of messages increases from 46,303 with a 32-node application to 93,472 with a 512-node application, while the mean message size decreases from 67,219 bytes with 32 nodes to 12,037 bytes with 512 nodes. This means that the InfiniPath InfiniBand adapter will become more effective as the number of nodes increases. This is borne out in other tests with large-scale clusters running other applications.
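The WRF figures above reduce to two ratios; the short calculation below simply restates the numbers quoted in the text rather than adding any new PathScale data:

```python
# Message statistics from PathScale's WRF testing, as quoted in the text.
msgs_32, size_32 = 46_303, 67_219      # 32-node run: message count, mean bytes
msgs_512, size_512 = 93_472, 12_037    # 512-node run: message count, mean bytes

count_growth = msgs_512 / msgs_32      # messages roughly double...
size_shrink = size_32 / size_512       # ...while mean size drops about 5.6x

print(f"Message count grows {count_growth:.2f}x, mean size shrinks {size_shrink:.2f}x")
print(f"Total traffic: {msgs_32 * size_32 / 1e9:.2f} GB -> "
      f"{msgs_512 * size_512 / 1e9:.2f} GB")
```

So scaling the job out multiplies the number of messages while shrinking each one, shifting the bottleneck from raw bandwidth toward the adapter's message rate.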
For developers, there is little difference between developing a standard MPI application and one that supports InfiniPath. Required software is limited to some Linux drivers and the InfiniPath software stack. Table 1 shows the versions of Linux that have been tested with the InfiniPath 1.2 release. PathScale also offers the EKOPath Compiler Suite version 2.3, which includes high-performance C, C++ and Fortran 77/90/95 compilers as well as support for OpenMP 2.0 and PathScale-specific optimizations. But the compiler suite is not required to develop InfiniPath applications because the InfiniPath software environment supports gcc, Intel and PGI compilers as well. The base software provides an environment for high-performance MPI and IP applications.
Table 1. The InfiniPath 1.2 release has been tested on the following Linux distributions for AMD Opteron systems (x86_64).
| Linux Release | Version Tested |
|---|---|
| Red Hat Enterprise Linux 4 | 2.6.9 |
| CentOS 4.0-4.2 (Rocks 4.0-4.2) | 2.6.9 |
| Red Hat Fedora Core 3 | 2.6.11, 2.6.12 |
| Red Hat Fedora Core 4 | 2.6.12, 2.6.13, 2.6.14 |
| SUSE Professional 9.3 | 2.6.11 |
| SUSE Professional 10.0 | 2.6.13 |
The optimized ipath_ether Ethernet driver provides high-performance networking support for existing TCP- and UDP-based applications (as well as other protocols that run over Ethernet), with no modifications required to the application. The OpenIB (OpenFabrics) driver provides complete InfiniBand and OpenIB compatibility. The software stack is freely available for download from PathScale's Web site. It currently supports IP over IB, verbs, MVAPICH and SDP (Sockets Direct Protocol).
PathScale also offers a trial program: you can compile and run your application on its 32-node cluster to see what performance gains you can attain. See www.pathscale.com/cbc.php.
In addition, you can test your applications on the Emerald cluster at the AMD Developer Center, which offers 144 dual-socket, dual-core nodes (576 Opteron 2.2GHz cores in total) connected with InfiniPath HTX adapters and a SilverStorm InfiniBand switch.
Tests performed on this cluster have shown excellent scalability at more than 500 processors, including the LS-Dyna three-vehicle collision results posted at www.topcrunch.org. See Table 2 for a listing of the top results of the benchmark. Notice that the only other cluster in the top ten is the Cray XD1, a system with a much higher per-node cost.
Table 2. LS-Dyna Three-Vehicle Collision Results, Posted at www.topcrunch.org
| Result (lower is better) | Manufacturer | Cluster Name | Processors | Nodes x CPUs x Cores |
|---|---|---|---|---|
| 184 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256 |
| 226 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 1 = 128 |
| 239 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128 |
| 239 | Rackable Systems/AMD Emerald/PathScale | InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256 |
| 244 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 64 x 2 x 1 = 128 |
| 258 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 48 x 2 x 1 = 96 |
| 258 | Rackable Systems/AMD Emerald/PathScale | InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 1 x 2 = 128 |
| 268 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 48 x 2 x 1 = 96 |
| 268 | Rackable Systems/AMD Emerald/PathScale | InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128 |
| 280 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 2 = 96 |
| 294 | Rackable Systems/AMD Emerald/PathScale | InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 48 x 1 x 2 = 96 |
| 310 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 64 x 2 x 1 = 128 |
| 315 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 1 = 64 |
| 327 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64 |
| 342 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64 |
| 373 | Rackable Systems/AMD Emerald/PathScale | InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 1 x 2 = 64 |
| 380 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64 |
| 384 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 1 = 48 |
| 394 | Rackable Systems/AMD Emerald/PathScale | InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64 |
| 399 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 24 x 2 x 1 = 48 |
| 405 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64 |
| 417 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 12 x 2 x 2 = 48 |
| 418 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 32 x 2 x 1 = 64 |
| 421 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64 |
| 429 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64 |
| 452 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64 |
| 455 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 24 x 2 x 1 = 48 |
| 456 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64 |
| 480 | PathScale, Inc. | Microway Navion/PathScale InfiniPath/SilverStorm IB switch | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32 |
| 492 | Appro/Level 5 Networks | 1122Hi-81/Level 5 Networks - 1Gb Ethernet NIC | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64 |
| 519 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48 |
| 527 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 1 = 32 |
| 529 | HP | Opteron CP4000/TopSpin InfiniBand | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32 |
| 541 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32 |
| 569 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 8 x 2 x 2 = 32 |
| 570 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48 |
| 584 | Appro/Rackable/Verari | Rackable and Verari Opteron Cluster/InfiniCon InfiniBand | AMD Opteron 2GHz | 64 x 1 x 1 = 64 |
| 586 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32 |
| 591 | Self-made (SKIF program)/United Institute of Informatics Problems | Minsk Opteron Cluster/InfiniBand | AMD Opteron 2.2GHz (248) | 35 x 1 x 1 = 35 |
Logan Harbaugh is a freelance reviewer and IT consultant located in Redding, California. He has been working in IT for 20 years and has written two books on networking, as well as articles for most of the major computer publications.