HPC Cluster Grant Accepting Applications!

Silicon Mechanics, Inc. has announced the open submission period for its 4th annual Research Cluster Grant Program. This competitive grant will award two complete high-performance computing clusters to two institutions of higher education and research. The competition is open to all qualified US and Canadian post-secondary institutions, university-affiliated research institutions, non-profit research institutions, and researchers at federal labs with university affiliations.

In each of the previous three years, an HPC cluster was awarded to a single institution. Previous winners were Wayne State University in 2014, Tufts University in 2013, and Saint Louis University in 2012, the grant’s inaugural year. This year’s expansion reflects the program’s success in aiding research at the three prior recipient institutions. In addition, Silicon Mechanics has experienced significant organic growth over the past several years, enabling it to expand the grant program to better meet needs within the research community.

“We designed the cluster grant to provide resources to researchers who were unlikely to receive grants through traditional grant-funding programs due to the interdisciplinary, collaborative nature of their research or other similar factors,” said Art Mann, Silicon Mechanics’ education/research/government vertical group manager. “We have seen the impact that this program has had on our recipient institutions over the past three years, and felt that it was crucial not just to continue the program but to expand it, allowing us to support even more impactful research.”

Applications for the grant are now open, and details on the grant rules, application requirements, and cluster technical specifications are available at www.researchclustergrant.com. Submissions will be accepted through March 1, 2015, and the grant recipients will be announced on or before March 31, 2015. Silicon Mechanics’ partners for this year’s grant clusters are Intel, NVIDIA, Mellanox, Supermicro, Kingston, Bright Computing, Seagate, and LSI Logic. The specs for the equipment to be awarded are below. Please try not to salivate excessively!

Cluster Configuration

Summary:

  • 2U head node with storage
  • 4 compute/GPU nodes
  • Gigabit & InfiniBand Networking
  • Rack with Power Distribution

One 2U head node with storage, featuring 2 Intel Xeon E5-2680v2 processors, 128 GB of Kingston DDR3-1600 RAM, 7.2 TB of Seagate Savvio 10K SAS storage controlled by an 8-port LSI RAID card, and 2 mirrored Intel Enterprise SSDs for OS storage. Network and connectivity are provided by a Mellanox ConnectX-3 FDR InfiniBand network adapter and an integrated Intel i350 gigabit Ethernet controller. Cluster management and job submission are provided by Bright Cluster Manager.

  • CPU: 2 x Intel Xeon E5-2680v2, 2.8 GHz (10-Core, 115W)
  • RAM: 128GB (8 x 16GB DDR3-1600 Registered ECC DIMMs)
  • Integrated NIC: Intel i350 Dual-Port Gigabit Ethernet Controller
  • InfiniBand: Mellanox Single-Port ConnectX-3 FDR InfiniBand Network Adapter
  • Management: Integrated IPMI 2.0 & KVM with Dedicated LAN
  • Hot-Swap Drives: 8 x 900GB Seagate Savvio 10K.6 (6Gb/s, 10K RPM, 64MB Cache) 2.5" SAS Hard Drives
  • OS Drives: 2 x 80GB Intel DC S3500 Series MLC (6Gb/s, 0.3 DWPD) 2.5" SATA SSDs
  • Drive Controller: LSI 9271-8i (8-Port Internal) 6Gb/s SAS RAID with CacheVault Module
  • Power Supply: Redundant 740W Power Supplies, 80 PLUS Platinum Certified
  • OS: Current CentOS Distribution
  • Cluster Management: Bright Cluster Manager Advanced Edition with 1 Year Maintenance and Support

One 4U 4-node compute/GPU system, with each node featuring 2 Intel Xeon E5-2680v2 processors, 128 GB of Kingston DDR3-1600 RAM, 2 NVIDIA Tesla K40m GPU accelerators, and 2 x 400 GB Intel DC S3700 Enterprise SATA SSDs. Network and connectivity are provided by a Mellanox ConnectX-3 FDR InfiniBand network adapter and an integrated Intel i350 gigabit Ethernet controller in each node. Cluster management and job submission are provided by Bright Cluster Manager running on the current version of the CentOS distribution. (A short GPU device-query sketch follows the spec list below.)

  • CPU: 2 x Intel Xeon E5-2680v2, 2.8 GHz (10-Core, 115W)
  • RAM: 128GB (8 x 16GB DDR3-1600 Registered ECC DIMMs)
  • Integrated NIC: Intel i350 Dual-Port Gigabit Ethernet Controller
  • InfiniBand: Mellanox Single-Port ConnectX-3 FDR InfiniBand Network Adapter
  • Management: Integrated IPMI 2.0 & KVM with Dedicated LAN
  • GPU: 2 x NVIDIA Tesla K40m GPU Accelerators
  • Hot-Swap Drives: 2 x 400GB Intel DC S3700 Series HET-MLC (6Gb/s, 10 DWPD) 2.5" SATA SSDs
  • OS: Current CentOS Distribution
  • Cluster Management: Bright Cluster Manager Advanced Edition with 1 Year Maintenance and Support
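
To give a sense of what those GPU nodes look like from a researcher's point of view, here is a minimal sketch, written by us for illustration and not part of the awarded configuration, that uses the standard CUDA runtime API to enumerate the accelerators in a node. On one of these compute nodes it should report two Tesla K40m devices; the presence of nvcc and the exact CUDA toolkit version shipped with the cluster are assumptions on our part.

    // query_gpus.cu -- minimal CUDA device enumeration sketch (illustrative only)
    // Build (assuming the CUDA toolkit is installed): nvcc query_gpus.cu -o query_gpus
    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }

        // On the compute nodes described above, this should report 2 devices.
        printf("Found %d CUDA device(s)\n", count);

        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %.1f GB memory, compute capability %d.%d\n",
                   i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor);
        }
        return 0;
    }

In practice a job like this would be launched on the compute nodes through the workload manager that Bright Cluster Manager deploys, rather than run by hand, but the device query itself is the same either way.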

One 24U APC NetShelter SX standard rack enclosure, featuring a 1U Mellanox 18-port SwitchX-2 FDR InfiniBand switch for communications and a 1U HP ProCurve gigabit Ethernet switch for management. Metered PDUs, FDR InfiniBand cabling, and Cat6a Ethernet cabling are also provided.

  • Rack: APC NetShelter SX 24U Standard Rack Enclosure, 600mm (W) x 1070mm (D)
  • InfiniBand: Mellanox 18-Port SwitchX-2 FDR InfiniBand Unmanaged Switch with 1 Year Silver Support
  • Ethernet: HP 48-Port ProCurve 1GbE Managed Switch
  • Power Distribution: APC Metered Rack PDU, 20A/120V
  • Interconnects and Cabling:
    • Mellanox FDR InfiniBand Passive Copper Cables
    • Cat6a Ethernet Networking Cables
