PathScale InfiniPath Interconnect
As the use of large clusters gains ground in academia and moves from the scientific world to the business world, many administrators are looking for ways to increase performance without significantly increasing the cost per node. Some may focus on CPU power/speed or the amount of RAM per node, relatively expensive components, to increase their horsepower. PathScale (recently acquired by QLogic) is taking a different approach, instead focusing on unleashing the computational power already contained in the cluster as a whole by allowing the “thoroughbred” processors built by Intel and AMD to move all the messages they are capable of generating.
Because InfiniPath dramatically increases the message traffic between nodes and reduces the latency of those messages, applications running on clusters can run faster and scale higher than previously possible. And, this increased performance comes from combining inexpensive x86 servers with standard InfiniBand adapters and switches.
The InfiniPath InfiniBand cluster interconnect is available in two flavors: PCI Express for ubiquitous deployments with any motherboard and any processor, and directly connected to the HyperTransport bus for the absolute lowest latency. This article deals with the InfiniPath HyperTransport (or HTX) product line. Servers with motherboards that support InfiniPath HTX are available from more than 25 different system vendors, including Linux Networx, Angstrom, Microway, Verari and Western Scientific. In the near future, servers with HTX slots could be available from the larger tier-one computer system suppliers. Motherboards with HTX slots are currently shipping from Iwill (the DK8-HTX) and Supermicro (H8QC8-HTe), with additional offerings from Arima, ASUS, MSI and others coming soon. InfiniPath adapters, which can be used with just about any flavor of Linux, can be connected to any InfiniBand switch from any vendor. Also, for mixed deployments with InfiniBand adapters from other vendors, InfiniPath supports the OpenFabrics (formerly OpenIB) software stack (downloadable from the PathScale Web site).
What the InfiniPath HTX adapter does better than any other cluster interconnect is accept the millions of messages generated every second by fast, multicore processors and deliver them to the receiving processor. Part of the secret is removing all the delays associated with bridge chips and the PCI bus, because traffic is routed over the much faster HyperTransport bus. In real-world testing, this produces a two- to three-times improvement in latency, and in real-world clustered applications, an increase in messages per second of ten times or more.
Message transmission rate is the unsung hero in the interconnect world, and by completely re-architecting its adapter, PathScale built InfiniPath to beat the next-best by more than ten times. Where the rest of the industry builds off-load engines, miniature versions of host servers with an embedded processor and separate memory, InfiniPath is based on a very simple, elegant design that does not duplicate the efforts of the host processor. Embedded processors on interconnect adapter cards are only about one-tenth the speed of host processors, so they can't keep up with the number of messages those host processors generate. By keeping things simple, InfiniPath avoids wasting CPU cycles on pinning cache and other housekeeping chores required with off-load engines, and instead does real work for the end user. The beauty of this approach is that it not only results in lower CPU utilization per MB transferred, but also in a lower memory footprint on host systems.
The reason a two- or three-times improvement in latency has such a large effect on the message rate (messages per second) is that lower latency reduces the time nodes spend waiting for the next communication at both ends of a transfer, so processors waste far fewer cycles waiting on adapters jammed with message traffic.
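The relationship between per-message latency and sustainable message rate can be sketched with a back-of-envelope model. This is purely illustrative: the `message_rate` helper and the microsecond figures below are assumptions chosen to mirror the two- to three-times latency improvement the article describes, not measured or vendor-published numbers.

```python
def message_rate(latency_us, overhead_us):
    """Messages per second a node can sustain if each small message
    costs (wire latency + per-message adapter overhead) microseconds.
    A simple serial model: rate = 1 / total time per message."""
    return 1e6 / (latency_us + overhead_us)

# Hypothetical figures: a conventional off-load adapter (slow embedded
# processor adds per-message overhead) vs. a lower-latency on-load
# design that leaves message processing to the fast host CPU.
offload = message_rate(latency_us=4.0, overhead_us=6.0)
onload = message_rate(latency_us=1.5, overhead_us=0.5)

print(f"off-load adapter: {offload:,.0f} msg/s")
print(f"on-load adapter:  {onload:,.0f} msg/s")
print(f"speedup:          {onload / offload:.1f}x")
```

Under this toy model, a roughly 2.5x latency improvement combined with the removal of per-message adapter overhead multiplies the message rate several times over, which is the effect the article attributes to the InfiniPath design.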
What does this mean for real-world applications? That depends on how the application uses messages, the sizes of those messages and how well the application is optimized for parallel processing. In my testing, using a 17-node (16 compute nodes and one master node) cluster, I got a result of 5,149.154 MB/sec using the b_eff benchmark. This compares with results of 1,553–1,660 MB/sec for other InfiniBand clusters tested by the Daresbury Lab in 2005, and with a maximum of 2,413 MB/sec for any other cluster tested. The clusters tested all had 16 CPUs.
See Listing 1 for the results of the b_eff benchmark. The results of the Daresbury Lab study are available at www.cse.clrc.ac.uk/disco/Benchmarks/commodity.2005/mfg_commodity.2005.pdf, page 21.