PathScale InfiniPath Interconnect

by Logan G. Harbaugh

As the use of large clusters gains ground in academia and moves from the scientific world to the business world, many administrators are looking for ways to increase performance without significantly increasing the cost per node. Some may focus on CPU power/speed or the amount of RAM per node, relatively expensive components, to increase their horsepower. PathScale (recently acquired by QLogic) is taking a different approach, instead focusing on unleashing the computational power already contained in the cluster as a whole by allowing the “thoroughbred” processors built by Intel and AMD to move all the messages they are capable of generating.

By dramatically increasing the message traffic between nodes and reducing the latency of those messages, InfiniPath lets applications running on clusters run faster and scale higher than previously possible. And the increased performance is achieved with a combination of inexpensive x86 servers and standard InfiniBand adapters and switches.

The InfiniPath InfiniBand cluster interconnect is available in two flavors: PCI Express for ubiquitous deployments with any motherboard and any processor, and directly connected to the HyperTransport bus for the absolute lowest latency. This article deals with the InfiniPath HyperTransport (or HTX) product line. Servers with motherboards that support InfiniPath HTX are available from more than 25 different system vendors, including Linux Networx, Angstrom, Microway, Verari and Western Scientific. In the near future, servers with HTX slots could be available from the larger tier-one computer system suppliers. Motherboards with HTX slots are currently shipping from Iwill (the DK8-HTX) and Supermicro (H8QC8-HTe), with additional offerings from Arima, ASUS, MSI and others coming soon. InfiniPath adapters, which can be used with just about any flavor of Linux, can be connected to any InfiniBand switch from any vendor. Also, for mixed deployments with InfiniBand adapters from other vendors, InfiniPath supports the OpenFabrics (formerly OpenIB) software stack (downloadable from the PathScale Web site).

What the InfiniPath HTX adapter does better than any other cluster interconnect is accept the millions of messages generated every second by fast, multicore processors and get them to the receiving processor. Part of the secret is removing all the delays associated with bridge chips and the PCI bus, because traffic is routed over the much faster HyperTransport bus. In real-world testing, this produces a two- to three-times improvement in latency, and in real-world clustered applications, an increase in messages per second of ten times or more.

Message transmission rate is the unsung hero in the interconnect world, and by completely re-architecting its adapter, InfiniPath beats the next-best by more than ten times. Where the rest of the industry builds off-load engines (miniature versions of host servers, with an embedded processor and separate memory), InfiniPath is based on a very simple, elegant design that does not duplicate the efforts of the host processor. Embedded processors on interconnect adapter cards are only about one-tenth the speed of host processors, so they can't keep up with the number of messages those processors generate. By keeping things simple, InfiniPath avoids wasting CPU cycles on memory pinning and other housekeeping chores required with off-load engines, and instead does real work for the end user. The beauty of this approach is that it not only results in lower CPU utilization per MB transferred, but it also has a lower memory footprint on host systems.

The reason a two- to three-times improvement in latency has such a large effect on the message rate (messages per second) is that low latency reduces the time nodes spend waiting at both ends of each communication, so processors waste far fewer cycles waiting on adapters jammed with message traffic.

What does this mean for real-world applications? It will depend on the way the application uses messages, the sizes of those messages and how well optimized it is for parallel processing. In my testing, using a 17-node (16 compute nodes and one master node) cluster, I got a result of 5,149.154 MB/sec using the b_eff benchmark. This compares with results of 1,553–1,660 MB/sec for other InfiniBand clusters tested by the Daresbury Lab in 2005, and with a maximum of 2,413 MB/sec for any other cluster tested. The clusters tested all had 16 CPUs.

See Listing 1 for the results of the b_eff benchmark. The results of the Daresbury Lab study are available on page 21 of the lab's report.

Listing 1. b_eff Output

The effective bandwidth is b_eff = 5149.154 MByte/s on 16 processes
( = 321.822 MByte/s * 16 processes)
Ping-pong latency: 1.352 microsec
Ping-pong bandwidth: 923.862 MByte/s at Lmax= 1.000 MByte
(MByte/s=1e6 Byte/s) (MByte=2**20 Byte)
system parameters : 16 nodes, 128 MB/node
system name : Linux
hostname : cbc-01
OS release : 2.6.12-1.1380_FC3smp
OS version : #1 SMP Wed Oct 19 21:05:57 EDT 2005
machine : x86_64
Date of measurement: Thu Jan 12 14:20:52 2006
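The aggregate and per-process figures in Listing 1 are related by simple arithmetic; a quick check in Python, using only the numbers reported by the benchmark:

```python
# Sanity-check the b_eff arithmetic from Listing 1: the aggregate
# effective bandwidth is the per-process figure times the process count.
processes = 16
per_process_mbs = 321.822    # MByte/s per process, from Listing 1
aggregate_mbs = 5149.154     # MByte/s on 16 processes, from Listing 1

product = per_process_mbs * processes
print(f"{product:.3f} vs reported {aggregate_mbs:.3f}")
```

The two figures agree to within rounding of the per-process value, confirming the listing is internally consistent.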

Most vendors do not publish their message rate, instead putting out their peak bandwidth and latency. But bandwidth varies with the size of the message, and peak bandwidth is achieved only at message sizes much larger than most applications generate. For most clustered applications, the actual throughput of the interconnect is a fraction of peak, because few clustered applications pass large messages back and forth between nodes. Rather, applications running on clusters pass a large number of very small (8–1,024 byte) messages back and forth as nodes begin and finish processing their small pieces of the overall task.

This means that for most applications, the rate at which messages can be passed between nodes, or message rate, will tend to limit the performance of the cluster more than the peak bandwidth of the interconnect will.
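To see why, consider a simple first-order model: the time to move one message is roughly a fixed per-message latency plus the message size divided by the link bandwidth. The latency and bandwidth figures below are illustrative assumptions, not measured InfiniPath numbers:

```python
# First-order interconnect model: per-message time =
# fixed latency + size / link bandwidth.
# The constants are illustrative assumptions, not vendor measurements.
LATENCY_S = 1.35e-6   # assumed 1.35 microseconds of fixed per-message cost
LINK_BW = 950e6       # assumed ~950 MByte/s peak link bandwidth

def effective_bandwidth(msg_bytes):
    """Achieved MByte/s when streaming back-to-back messages of one size."""
    per_msg_time = LATENCY_S + msg_bytes / LINK_BW
    return msg_bytes / per_msg_time / 1e6

for size in (8, 1024, 65536, 1048576):
    print(f"{size:>8} bytes -> {effective_bandwidth(size):8.1f} MByte/s")
```

Under these assumptions, 8-byte messages achieve only a few MByte/s even though the link peaks near 950 MByte/s: small messages are dominated entirely by the fixed per-message cost, which is why message rate, not peak bandwidth, governs small-message workloads.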

As end users attempt to solve more granular problems with bigger clusters, the average message size goes down and the overall number of messages goes up. According to PathScale's testing with the WRF modeling application, the average number of messages increases from 46,303 with a 32-node application to 93,472 with a 512-node application, while the mean message size decreases from 67,219 bytes with 32 nodes to 12,037 bytes with 512 nodes. This means that the InfiniPath InfiniBand adapter will become more effective as the number of nodes increases. This is borne out in other tests with large-scale clusters running other applications.
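Plugging the WRF figures quoted above into back-of-the-envelope arithmetic makes the shift concrete: the total volume of data moved actually shrinks as the cluster grows, while the number of latency-bound operations roughly doubles.

```python
# Back-of-the-envelope on the WRF numbers quoted in the text:
# (message count, mean bytes per message) at 32 vs. 512 nodes.
runs = {
    32:  (46_303, 67_219),
    512: (93_472, 12_037),
}

for nodes, (count, mean_size) in runs.items():
    total_gb = count * mean_size / 1e9
    print(f"{nodes:>3} nodes: {count:,} messages, "
          f"{mean_size:,} bytes mean, {total_gb:.2f} GB total")
```

The message count roughly doubles while the mean size drops by more than five times, so aggregate bytes fall; what grows is the count of small, latency-bound messages, exactly the regime where a high message rate pays off.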

For developers, there is little difference between developing a standard MPI application and one that supports InfiniPath. Required software is limited to some Linux drivers and the InfiniPath software stack. Table 1 shows the versions of Linux that have been tested with the InfiniPath 1.2 release. PathScale also offers the EKOPath Compiler Suite version 2.3, which includes high-performance C, C++ and Fortran 77/90/95 compilers as well as support for OpenMP 2.0 and PathScale-specific optimizations. But the compiler suite is not required to develop InfiniPath applications because the InfiniPath software environment supports gcc, Intel and PGI compilers as well. The base software provides an environment for high-performance MPI and IP applications.

Table 1. The InfiniPath 1.2 release has been tested on the following Linux distributions for AMD Opteron systems (x86_64).

Linux Release                      | Version Tested
Red Hat Enterprise Linux 4         | 2.6.9
CentOS 4.0-4.2 (Rocks 4.0-4.2)     | 2.6.9
Red Hat Fedora Core 3              | 2.6.11, 2.6.12
Red Hat Fedora Core 4              | 2.6.12, 2.6.13, 2.6.14
SUSE Professional                  |
SUSE Professional                  |

The optimized ipath_ether Ethernet driver provides high-performance networking support for existing TCP- and UDP-based applications (in addition to other protocols that run over Ethernet), with no modifications required to the application. The OpenFabrics (formerly OpenIB) driver provides complete InfiniBand and OpenIB compatibility. This software stack is freely available as a download on the PathScale Web site. It currently supports IP over IB, verbs, MVAPICH and SDP (Sockets Direct Protocol).

PathScale offers a trial program: you can compile and run your application on its 32-node cluster to see what performance gains you can attain.

In addition, you can test your applications on the Emerald cluster at the AMD Developer Center, which offers 144 dual-socket, dual-core nodes, for a total of 576 2.2GHz Opteron CPUs connected with InfiniPath HTX adapters and a SilverStorm InfiniBand switch.

Tests performed on this cluster have shown excellent scalability at more than 500 processors, including the posted LS-Dyna three-vehicle collision results. See Table 2 for a listing of the top 40 results of the benchmark. Notice that the only other cluster in the top ten is the much more expensive per-node Cray XD1 system.

Table 2. LS-Dyna Three-Vehicle Collision Results

Result (lower is better) | Manufacturer | Cluster Name | Processors | Nodes x CPUs x Cores
184 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256
226 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 64 x 2 x 1 = 128
239 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128
239 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 2 x 2 = 256
244 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 64 x 2 x 1 = 128
258 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 48 x 2 x 1 = 96
258 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 64 x 1 x 2 = 128
268 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 48 x 2 x 1 = 96
268 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 2 x 2 = 128
280 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 2 = 96
294 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 48 x 1 x 2 = 96
310 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 64 x 2 x 1 = 128
315 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 32 x 2 x 1 = 64
327 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64
342 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
373 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 32 x 1 x 2 = 64
380 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64
384 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 24 x 2 x 1 = 48
394 | Rackable Systems/AMD | Emerald/PathScale InfiniPath/SilverStorm InfiniBand switch | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
399 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 24 x 2 x 1 = 48
405 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64
417 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 12 x 2 x 2 = 48
418 | Galactic Computing (Shenzhen) Ltd. | GT4000/InfiniBand | Intel Xeon 3.6GHz | 32 x 2 x 1 = 64
421 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64
429 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 32 x 2 x 1 = 64
452 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 32 x 2 x 1 = 64
455 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.2GHz | 24 x 2 x 1 = 48
456 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 32 x 2 x 1 = 64
480 | PathScale, Inc. | Microway Navion/PathScale InfiniPath/SilverStorm IB switch | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32
492 | Appro/Level 5 Networks | 1122Hi-81/Level 5 Networks 1Gb Ethernet NIC | AMD dual-core Opteron 2.2GHz | 16 x 2 x 2 = 64
519 | HP | Itanium 2 CP6000/InfiniBand TopSpin | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48
527 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 16 x 2 x 1 = 32
529 | HP | Opteron CP4000/TopSpin InfiniBand | AMD Opteron 2.6GHz | 16 x 2 x 1 = 32
541 | Cray, Inc. | CRAY XD1/RapidArray | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32
569 | Cray, Inc. | CRAY XD1/RapidArray | AMD dual-core Opteron 2.2GHz | 8 x 2 x 2 = 32
570 | HP | Itanium 2 Cluster/InfiniBand | Intel Itanium 2 1.5GHz | 24 x 2 x 1 = 48
584 | Appro/Rackable/Verari | Rackable and Verari Opteron Cluster/InfiniCon InfiniBand | AMD Opteron 2GHz | 64 x 1 x 1 = 64
586 | IBM | e326/Myrinet | AMD Opteron 2.4GHz | 16 x 2 x 1 = 32
591 | Self-made (SKIF program)/United Institute of Informatics Problems | Minsk Opteron Cluster/InfiniBand | AMD Opteron 2.2GHz (248) | 35 x 1 x 1 = 35

Logan Harbaugh is a freelance reviewer and IT consultant located in Redding, California. He has been working in IT for 20 years and has written two books on networking, as well as articles for most of the major computer publications.
