PathScale InfiniPath Interconnect

InfiniBand and AMD HyperTransport are made for each other just like soup and...something that goes with soup.

Most vendors do not publish their message rate; instead, they publish peak bandwidth and latency. But bandwidth varies with the size of the message, and peak bandwidth is achieved only at message sizes much larger than most applications generate. For most clustered applications, the actual throughput of the interconnect is a fraction of peak, because few clustered applications pass large messages back and forth between nodes. Rather, applications running on clusters exchange a large number of very small (8–1,024 byte) messages as nodes begin and finish processing their small pieces of the overall task.

This means that for most applications, the number of messages that can be passed between nodes per second, or message rate, limits the performance of the cluster more than the peak bandwidth of the interconnect does.
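The relationship can be sketched with a toy model (the numbers below are hypothetical, not vendor figures): effective throughput is the smaller of the link's peak bandwidth and the adapter's message rate multiplied by the message size.

```python
def effective_throughput(msg_size_bytes, msg_rate_per_s, peak_bw_bytes_per_s):
    # Small messages are limited by how many messages per second the
    # adapter can issue; large messages are limited by peak bandwidth.
    return min(peak_bw_bytes_per_s, msg_rate_per_s * msg_size_bytes)

PEAK = 10e9   # hypothetical 10GB/s peak bandwidth
RATE = 5e6    # hypothetical 5 million messages per second

for size in (8, 1024, 65536):
    bw = effective_throughput(size, RATE, PEAK)
    print(f"{size:6d}-byte messages: {bw / 1e9:6.2f} GB/s "
          f"({100 * bw / PEAK:5.1f}% of peak)")
```

At 8 bytes, this toy adapter delivers well under 1% of its peak bandwidth, which is why a high message rate matters more than the headline bandwidth number for fine-grained codes.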

As end users attempt to solve more granular problems with bigger clusters, the average message size goes down and the overall number of messages goes up. According to PathScale's testing with the WRF modeling application, the average number of messages increases from 46,303 with a 32-node application to 93,472 with a 512-node application, while the mean message size decreases from 67,219 bytes with 32 nodes to 12,037 bytes with 512 nodes. This means that the InfiniPath InfiniBand adapter will become more effective as the number of nodes increases. This is borne out in other tests with large-scale clusters running other applications.
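A quick back-of-the-envelope check on those WRF figures shows what the shift means: from 32 to 512 nodes the message count roughly doubles while the total payload actually shrinks, so per-message overhead, not raw bandwidth, comes to dominate.

```python
# WRF figures quoted above: nodes -> (message count, mean message size in bytes)
wrf = {32: (46_303, 67_219), 512: (93_472, 12_037)}

for nodes, (msgs, mean_size) in wrf.items():
    total_mb = msgs * mean_size / 1e6
    print(f"{nodes:3d} nodes: {msgs:,} messages x {mean_size:,} bytes "
          f"= {total_mb:,.0f} MB total")
```

The message count doubles while the total bytes moved drop by almost two-thirds, so the interconnect spends proportionally more of its time on per-message startup costs.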

For developers, there is little difference between developing a standard MPI application and one that supports InfiniPath. Required software is limited to some Linux drivers and the InfiniPath software stack. Table 1 shows the versions of Linux that have been tested with the InfiniPath 1.2 release. PathScale also offers the EKOPath Compiler Suite version 2.3, which includes high-performance C, C++ and Fortran 77/90/95 compilers as well as support for OpenMP 2.0 and PathScale-specific optimizations. But the compiler suite is not required to develop InfiniPath applications because the InfiniPath software environment supports gcc, Intel and PGI compilers as well. The base software provides an environment for high-performance MPI and IP applications.

Table 1. The InfiniPath 1.2 release has been tested on the following Linux distributions for AMD Opteron systems (x86_64).

Linux Release | Kernel Version(s) Tested
Red Hat Enterprise Linux 4 | 2.6.9
CentOS 4.0-4.2 (Rocks 4.0-4.2) | 2.6.9
Red Hat Fedora Core 3 | 2.6.11, 2.6.12
Red Hat Fedora Core 4 | 2.6.12, 2.6.13, 2.6.14
SUSE Professional 9.3 | 2.6.11
SUSE Professional 10.0 | 2.6.13

The optimized ipath_ether Ethernet driver provides high-performance networking support for existing TCP- and UDP-based applications (in addition to other protocols that run over Ethernet), with no modifications required to the application. The OpenIB (OpenFabrics) driver provides complete InfiniBand and OpenIB compatibility. The software stack is freely available for download from PathScale's Web site. It currently supports IP over IB, verbs, MVAPICH and SDP (Sockets Direct Protocol).
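Because ipath_ether looks like an ordinary network interface to the kernel, existing socket code really does run unchanged. A minimal TCP echo exchange like the one below (shown on loopback for the demo; over ipath_ether you would simply use the address assigned to that interface) contains nothing InfiniPath-specific.

```python
import socket
import threading

def echo_server(sock):
    # Accept one connection and echo back whatever arrives.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to loopback on an ephemeral port for the demo; over ipath_ether
# you would bind to the address assigned to the ipath interface instead.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=echo_server, args=(srv,))
t.start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply)  # prints b'ping'
```

The same program would benefit from ipath_ether's lower latency without recompilation, which is the point of the driver.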

PathScale offers a trial program: you can compile and run your application on its 32-node cluster to see what performance gains you can attain. See www.pathscale.com/cbc.php.

In addition, you can test your applications on the Emerald cluster at the AMD Developer Center, which offers 144 dual-socket, dual-core nodes, for a total of 576 2.2GHz Opteron CPUs connected with InfiniPath HTX adapters and a SilverStorm InfiniBand switch.

Tests performed on this cluster have shown excellent scalability at more than 500 processors, including the LS-Dyna three-vehicle collision results posted at www.topcrunch.org. See Table 2 for a listing of the top 40 results of the benchmark. Notice that the only other cluster in the top ten is the Cray XD1, a system that is much more expensive per node.

Table 2. LS-Dyna Three-Vehicle Collision Results, Posted at www.topcrunch.org

Result (lower is better) | Manufacturer | Cluster Name | Processors | Nodes x CPUs x Cores
184 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256
226 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 1 = 128
239 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128
239 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256
244 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 64 x 2 x 1 = 128
258 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 48 x 2 x 1 = 96
258 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 1 x 2 = 128
268 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 48 x 2 x 1 = 96
268 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128
280 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 2 = 96
294 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 48 x 1 x 2 = 96
310 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 64 x 2 x 1 = 128
315 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 1 = 64
327 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64
342 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
373 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 1 x 2 = 64
380 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64
384 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 1 = 48
394 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
399 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 24 x 2 x 1 = 48
405 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64
417 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 12 x 2 x 2 = 48
418 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 32 x 2 x 1 = 64
421 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64
429 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64
452 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64
455 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 24 x 2 x 1 = 48
456 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64
480 | PathScale, Inc. | Microway Navion/PathScale InfiniPath/SilverStorm IB switch | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32
492 | Appro/Level 5 Networks | 1122Hi-81/Level 5 Networks 1Gb Ethernet NIC | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
519 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48
527 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 1 = 32
529 | HP | Opteron CP4000/TopSpin InfiniBand | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32
541 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32
569 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 8 x 2 x 2 = 32
570 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48
584 | Appro/Rackable/Verari | Rackable and Verari Opteron Cluster/InfiniCon InfiniBand | AMD Opteron 2GHz | 64 x 1 x 1 = 64
586 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32
591 | Self-made (SKIF program)/United Institute of Informatics Problems | Minsk Opteron Cluster/InfiniBand | AMD Opteron 2.2GHz (248) | 35 x 1 x 1 = 35

Logan Harbaugh is a freelance reviewer and IT consultant located in Redding, California. He has been working in IT for 20 years and has written two books on networking, as well as articles for most of the major computer publications.

______________________

Comments

Topcrunch results are dated by 2 months...

Take a look at www.topcrunch.org today (8/4/2006). Intel has hit the Top 10 with only 32 dual-processor nodes. It will be interesting to see what Intel Xeon 5160 + InfiniPath + PCI-Express + OpenIB will do to improve these results even further. Anyone up for this?

Topcrunch results

Yes, take a look at the results, as they were achieved with Intel Xeon 5160 + InfiniBand + OpenIB. Using InfiniPath instead of InfiniBand will just eat the CPU resources.

TopCrunch Results

The prior comment comes from someone who is obviously not familiar with InfiniPath; we have found that InfiniPath InfiniBand outperforms all other InfiniBand implementations pretty much across the board - at least for MPI applications like LS-DYNA.
