Hadoop Isn’t Just for Web 2.0 Big Data Anymore: Hadoop for HPC

In 2004, Google published a white paper describing MapReduce, the framework it used to run similar data transformations and queries quickly and reliably at terabyte scale. Inspired by that paper, the Hadoop project grew out of Nutch, Apache's open source search project, and Yahoo! invested heavily in its development to support its own search product. Apache eventually promoted Hadoop, with its MapReduce engine and distributed file system (HDFS), out of Nutch and into a standalone project.

Although Hadoop has not yet reached a 1.0 release, it has proven stable and useful for Big Data Web 2.0 applications. When you use LinkedIn, Facebook, Twitter, or Yahoo!, you are relying on Hadoop behind the scenes.

What about Hadoop for High Performance Computing with scientific applications? It certainly has its place, and a basic understanding of Hadoop helps you see where you can take advantage of it in HPC.

First, what is MapReduce? MapReduce is a method of performing parallel computations on very large volumes of data by dividing the workload across a large number of similar machines, called 'nodes'. Its near-linear scalability comes from careful data and file management: each node is a server with its own attached disk storage, and work is assigned to the node that already holds the relevant data, rather than moving data to wherever processing happens to occur. This data locality dramatically accelerates applications that process Big Data sets.

With MapReduce, you 'map' your input data to the kind of output you want using a function that can be applied independently to every record: substituting a space for every comma in the input, for instance, or emitting a count for each word in a book. The 'reduce' step then aggregates the mapped data into useful results, perhaps through functions such as addition and subtraction.
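As a concrete illustration, here is a minimal sketch of the classic word count job written against the Hadoop MapReduce Java API (a recent Hadoop release is assumed, and the input/output paths are placeholders, not anything specific to Microway's clusters). The mapper emits a (word, 1) pair for every word it sees; the reducer sums the counts for each word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map step: for every word in a line of input, emit the pair (word, 1).
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce step: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable count : values) {
        sum += count.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Submitted with a command along the lines of "hadoop jar wordcount.jar WordCount /input /output", the same code runs unchanged whether the cluster has four nodes or four thousand; the framework handles splitting the input, scheduling tasks next to the data, and shuffling the mapped pairs to the reducers.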

Much like Red Hat with Linux, there are now commercial distributions of Hadoop, such as Cloudera's, that provide tools to simplify Hadoop implementation as well as reliable technical support. Hadoop itself provides built-in fault tolerance by keeping three copies of each block of data distributed across processing nodes, enabling a robust implementation 'out of the box'. And whereas GPFS and Lustre have scaled across hundreds of servers, known Hadoop implementations have successfully scaled across tens of thousands of nodes.
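That replication factor is just an HDFS setting, and it can be inspected or adjusted per file from client code. The sketch below uses the HDFS Java API; the file path is hypothetical and the cluster configuration files are assumed to be on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: check and raise the HDFS replication of a (hypothetical) data set.
public class ReplicationCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // connects to the default filesystem (HDFS)

    Path file = new Path("/data/genome.fasta"); // placeholder input file
    FileStatus status = fs.getFileStatus(file);
    System.out.println("Current replication: " + status.getReplication());

    // Request three copies of each block; HDFS re-replicates in the background.
    fs.setReplication(file, (short) 3);
  }
}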

So what does all this mean for HPC, scientific, and engineering applications? Microway sees Hadoop as an excellent addition to the software stack for data-intensive scientific applications, including bioinformatics, physics, and weather modeling. Hadoop can also accelerate science when the workloads include a series of queries against very large data sets. Additionally, when scaling science from the desktop up to larger workloads, Hadoop can provide an effective transition model.

A few examples of Microway Hadoop solutions include the NumberSmasher 1U, 2U, and 4U servers. With one to four multi-core Xeon CPUs, up to 512GB of memory, and up to 120TB of storage, the NumberSmasher servers are flexible and cost-effective. Microway will build your cluster for you, whether it's four nodes or a hundred.

We speak HPC, and we speak Hadoop! To learn more about how Hadoop can accelerate your science and engineering workloads, feel free to reach a specialist at wespeakhpc@microway.com.
