Many proprietary high-availability (HA) software providers require users to pay extra for system-management capabilities. Bucking this convention and driving down costs is LINBIT, whose DRBD HA software solution, part of the Linux kernel since 2009, powers thousands of digital enterprises.

JMR SiloStor NVMe SSD Drives

The newly developed JMR SiloStor NVMe family of SSD drives is designed to shine in compute-intensive workflows.

SUSE Linux Enterprise High Availability Extension

Historically, data replication has been available only piecemeal through proprietary vendors. To remedy that, SUSE and partner LINBIT have announced a solution that promises to change the economics of data replication.

Three EU Industries That Need HPC Now

The success of High Performance Computing (HPC) relies in no small part on the OpenPOWER Foundation, founded in 2013. This open ecosystem matters because it gives members open access to IBM POWER8 technology, which has driven major advances in innovation.

Rogue Wave Software's TotalView for HPC and CodeDynamics

Rogue Wave Software recently unveiled new versions of not just one but two dynamic analysis tools. The upgraded TotalView for HPC and CodeDynamics, both at version 2016.07, improve the diagnosis and correction of bugs, memory issues and crashes at execution time.

IBM LinuxONE Provides New Options for Linux Deployment

In August 2015, IBM announced LinuxONE (www-03.ibm.com/press/us/en/pressrelease/47474.wss), anchored by two new Linux mainframe servers that capitalize on best-of-class mainframe security and performance, and that bring these strengths to open-source-based technologies and the Open Source community.


A single computer can no longer meet today's computational needs in diverse fields such as weather forecasting, astronomy, aerodynamics simulations for cars, material sciences and computational drug design. It is therefore necessary to combine multiple computers into one system, a so-called computer cluster, to obtain the required computational power.


Because you're a reader of Linux Journal, you probably already know that Linux has a rich virtualization ecosystem. KVM is the de facto standard, and VirtualBox is widely used for desktop virtualization. Veterans will remember Xen (it's still in good shape, by the way), and there is also VMware (which isn't free but runs on Linux as well).

High-Availability Storage with HA-LVM

In recent years, data centers have been opting for commodity hardware and software over proprietary solutions. Why shouldn't they? Commodity gear offers extremely low costs and the flexibility to build an ecosystem however you prefer. The only limit is the administrator's imagination.

How YARN Changed Hadoop Job Scheduling

Scheduling means different things depending on the audience. To many in the business world, scheduling is synonymous with workflow management. Workflow management is the coordinated execution of a collection of scripts or programs for a business workflow, with monitoring, logging and execution guarantees built into a WYSIWYG editor.

Linux Containers and the Future Cloud

Linux-based container infrastructure is an emerging cloud technology based on fast and lightweight process virtualization. It provides users with an environment as close as possible to a standard Linux distribution.

SIDUS—the Solution for Extreme Deduplication of an Operating System

SIDUS (Single-Instance Distributing Universal System) was developed at Centre Blaise Pascal (Ecole normale supérieure de Lyon, Lyon, France), where a single administrator is in charge of 180 stations. Emmanuel Quemener started SIDUS in February 2010 and has significantly cut the workload of administering this fleet of stations. SIDUS is now in use at the supercomputing centre PSM

Lock-Free Multi-Producer Multi-Consumer Queue on Ring Buffer

Nowadays, high-performance server software (an HTTP accelerator, for example) usually runs on multicore machines. Modern hardware can provide 32, 64 or more CPU cores. In such highly concurrent environments, lock contention sometimes hurts overall system performance more than data copying, context switches and so on.
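To give a flavor of the technique the title refers to, here is a minimal sketch (not the article's own code) of a bounded multi-producer/multi-consumer ring-buffer queue in the style of Dmitry Vyukov's well-known algorithm: each cell carries a sequence number, and producers and consumers coordinate through atomic compare-and-swap rather than a lock.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a bounded lock-free MPMC queue (Vyukov-style).
// Capacity must be a power of two so "pos & mask_" wraps around the ring.
template <typename T>
class MpmcQueue {
    struct Cell {
        std::atomic<size_t> seq;  // expected position; gates access to value
        T value;
    };
    std::vector<Cell> ring_;
    size_t mask_;
    std::atomic<size_t> head_{0};  // next slot to pop
    std::atomic<size_t> tail_{0};  // next slot to push
public:
    explicit MpmcQueue(size_t capacity_pow2)
        : ring_(capacity_pow2), mask_(capacity_pow2 - 1) {
        for (size_t i = 0; i < capacity_pow2; ++i)
            ring_[i].seq.store(i, std::memory_order_relaxed);
    }
    bool push(T v) {
        size_t pos = tail_.load(std::memory_order_relaxed);
        for (;;) {
            Cell &c = ring_[pos & mask_];
            intptr_t diff = (intptr_t)c.seq.load(std::memory_order_acquire)
                          - (intptr_t)pos;
            if (diff == 0) {  // cell is free; claim it with CAS
                if (tail_.compare_exchange_weak(pos, pos + 1,
                                                std::memory_order_relaxed)) {
                    c.value = std::move(v);
                    c.seq.store(pos + 1, std::memory_order_release);
                    return true;
                }  // CAS failure updated pos; retry
            } else if (diff < 0) {
                return false;  // queue full
            } else {
                pos = tail_.load(std::memory_order_relaxed);
            }
        }
    }
    bool pop(T &out) {
        size_t pos = head_.load(std::memory_order_relaxed);
        for (;;) {
            Cell &c = ring_[pos & mask_];
            intptr_t diff = (intptr_t)c.seq.load(std::memory_order_acquire)
                          - (intptr_t)(pos + 1);
            if (diff == 0) {  // cell holds data; claim it with CAS
                if (head_.compare_exchange_weak(pos, pos + 1,
                                                std::memory_order_relaxed)) {
                    out = std::move(c.value);
                    // mark cell free for the producer one lap ahead
                    c.seq.store(pos + mask_ + 1, std::memory_order_release);
                    return true;
                }
            } else if (diff < 0) {
                return false;  // queue empty
            } else {
                pos = head_.load(std::memory_order_relaxed);
            }
        }
    }
};
```

Because contended threads retry on a failed compare-and-swap instead of sleeping on a mutex, no single slow thread serializes the others, which is exactly where lock contention bites on many-core machines.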

Introduction to MapReduce with Hadoop on Linux

When your data and work grow, and you still want to produce results in a timely manner, you start to think big. Your one beefy server reaches its limits. You need a way to spread your work across many computers. You truly need to scale out.
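The MapReduce idea itself fits in a few lines: a map step emits key/value pairs from each input split, and a reduce step folds together all values sharing a key. Below is a toy in-process word count illustrating the shape of the computation (a sketch only; real Hadoop jobs distribute the mappers and reducers across nodes via the Java API or Hadoop Streaming).

```cpp
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// "Map" phase: emit a (word, 1) pair for every whitespace-separated token.
std::vector<std::pair<std::string, int>> map_phase(const std::string &chunk) {
    std::vector<std::pair<std::string, int>> out;
    std::istringstream in(chunk);
    std::string word;
    while (in >> word)
        out.emplace_back(word, 1);
    return out;
}

// "Shuffle + Reduce" phase: group pairs by key and sum the counts.
// (In Hadoop, the shuffle routes each key to one reducer over the network.)
std::map<std::string, int> reduce_phase(
        const std::vector<std::pair<std::string, int>> &pairs) {
    std::map<std::string, int> counts;
    for (const auto &p : pairs)
        counts[p.first] += p.second;
    return counts;
}
```

Because each mapper sees only its own split and each reducer only its own keys, the same logic scales out: add machines, split the input further, and the framework handles routing and retries.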