HPC

The High-Performance Computing Issue

Since the dawn of computing, hardware engineers have had one goal that's stood out above all the rest: speed. Sure, computers have many other important qualities (size, power consumption, price and so on), but nothing captures our attention like the never-ending quest for faster hardware (and software to power it). Faster drives. Faster RAM. Faster processors. Speed, speed and more speed. [Insert manly grunting sounds here.] What's the first thing that happens when a new CPU is released? Benchmarks to compare it against the last batch of processors.

Linux and Supercomputers

As we sit here, in the year Two Thousand and Eighteen (better known as "the future, where the robots live"), our beloved Linux is the undisputed king of supercomputing. Of the top 500 supercomputers in the world, approximately zero of them don't run Linux (give or take...zero).

ONNX: the Open Neural Network Exchange Format

An open-source battle is being waged for the soul of artificial intelligence. It is being fought by industry titans, universities and communities of machine-learning researchers world-wide. This article chronicles one small skirmish in that fight: a standardized file format for neural networks. At stake is the open exchange of data among a multitude of tools instead of competing monolithic frameworks.
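
To give a concrete taste of what such an exchange format enables, here is a minimal sketch, assuming PyTorch with its bundled torch.onnx exporter is installed; the toy model and the output filename are hypothetical:

    # Minimal sketch: export a toy PyTorch model to an ONNX file
    # that any ONNX-aware tool can then load. Assumes the torch
    # package (with its bundled ONNX exporter) is installed.
    import torch
    import torch.nn as nn

    # A toy two-layer network standing in for a real model.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    # The exporter traces the model with a sample input.
    dummy_input = torch.randn(1, 4)
    torch.onnx.export(
        model,
        dummy_input,
        "tiny_model.onnx",       # hypothetical output filename
        input_names=["input"],
        output_names=["output"],
    )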

LINBIT's DRBD Top

Many proprietary high-availability (HA) software providers require users to pay extra for system-management capabilities. Bucking this convention and driving down costs is LINBIT, whose DRBD HA software solution, part of the Linux kernel since 2009, powers thousands of digital enterprises.

JMR SiloStor NVMe SSD Drives

Compute-intensive workflows are the environments in which the newly developed JMR SiloStor NVMe family of SSD drives is designed to shine.

SUSE Linux Enterprise High Availability Extension

Historically, data replication has been available only piecemeal through proprietary vendors. Aiming to rewrite that history, SUSE and partner LINBIT announced a solution that promises to change the economics of data replication.

Three EU Industries That Need HPC Now

The success of High Performance Computing (HPC) relies in no small part on the OpenPOWER Foundation, founded in 2013. This open ecosystem is important because it gives members open access to IBM's POWER8 technology, which has driven significant innovation.

Rogue Wave Software's TotalView for HPC and CodeDynamics

Rogue Wave Software recently unveiled new versions of not just one but two dynamic analysis tools. The upgraded TotalView for HPC and CodeDynamics, both at version 2016.07, improve the diagnosis and correction of bugs, memory problems and crashes at run time.

IBM LinuxONE Provides New Options for Linux Deployment

In August 2015, IBM announced LinuxONE (www-03.ibm.com/press/us/en/pressrelease/47474.wss), anchored by two new Linux mainframe servers that capitalize on best-of-class mainframe security and performance, and that bring these strengths to open-source-based technologies and the Open Source community.

LUCI4HPC

Today's computational needs in fields such as weather forecasting, astronomy, aerodynamics simulations for cars, materials science and computational drug design cannot be met by a single computer. Obtaining the required computational power means combining multiple computers into one system, a so-called computer cluster.
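
To show the flavor of programming such a cluster, here is a minimal sketch using the mpi4py bindings for the Message Passing Interface; this is a generic illustration of cluster-style parallelism, not LUCI4HPC's own API. Each process sums a slice of a range, and the partial sums are combined on one node:

    # Minimal sketch of cluster-style parallelism with mpi4py.
    # Run with, for example: mpiexec -n 4 python partial_sums.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID
    size = comm.Get_size()   # total number of processes

    # Each process sums its own strided slice of 0..n-1.
    n = 10_000_000
    local = sum(range(rank, n, size))

    # Combine the partial sums on process 0.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)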

Jailhouse

Because you're a reader of Linux Journal, you probably already know that Linux has a rich virtualization ecosystem. KVM is the de facto standard, and VirtualBox is widely used for desktop virtualization. Veterans will remember Xen (which is still in good shape, by the way), and there is also VMware (which isn't free but runs on Linux as well).

High-Availability Storage with HA-LVM

In recent years, data centers have been opting for commodity hardware and software over proprietary solutions. Why shouldn't they? Commodity gear offers extremely low costs and the flexibility to build an ecosystem however one prefers. The only limit is the administrator's imagination.

How YARN Changed Hadoop Job Scheduling

Scheduling means different things depending on the audience. To many in the business world, scheduling is synonymous with workflow management: the coordinated execution of a collection of scripts or programs for a business workflow, with monitoring, logging and execution guarantees built in to a WYSIWYG editor.
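
To make that definition concrete, here is a toy sketch in Python (purely illustrative, not Hadoop or YARN code) of the core of workflow management: running a set of steps in dependency order while logging each one:

    # Toy sketch of workflow management: execute steps in
    # dependency order with logging. Real workflow managers add
    # retries, monitoring and execution guarantees on top.
    import logging
    from graphlib import TopologicalSorter  # Python 3.9+

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(message)s")

    # Hypothetical workflow: each step maps to the steps it
    # depends on.
    workflow = {
        "extract": set(),
        "transform": {"extract"},
        "load": {"transform"},
        "report": {"load"},
    }

    def run_step(name):
        # A real manager would invoke a script or program here.
        logging.info("running step: %s", name)

    for step in TopologicalSorter(workflow).static_order():
        run_step(step)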