Hadoop Isn’t Just for Web 2.0 Big Data Anymore: Hadoop for HPC
In 2004, Google published a white paper describing its use of the MapReduce framework to perform fast, reliable data transformations and queries at terabyte scale. Yahoo! subsequently invested in the Hadoop project to support its search product, and Apache promoted Hadoop, its MapReduce and distributed file system (HDFS) initiative, out of Nutch, its open source search project.
Although Hadoop is technically still pre-1.0, it has proven stable and useful for Big Data Web 2.0 applications. When you use services such as LinkedIn, Facebook, Twitter, and Yahoo!, you are running on Hadoop.
What about Hadoop for High Performance Computing with scientific applications? It certainly has its place, and a basic understanding of Hadoop helps you see where you can take advantage of it in HPC.
First, what is MapReduce? MapReduce is a methodology for performing parallel computations on very large volumes of data by dividing the workload across a large number of similar machines, called ‘nodes’. MapReduce enables near-linear scalability through careful data and file management. It also differs from other methodologies in that its nodes are servers with attendant disk storage: work is allocated to the node where the data already resides, as opposed to moving data to where the processing occurs. This dramatically accelerates applications that process Big Data sets.
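As a rough illustration of dividing a workload across nodes, the sketch below (plain Python, not Hadoop’s actual API; the data set and chunking scheme are hypothetical) partitions a data set into per-node chunks and processes each chunk in parallel, combining the partial results at the end:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each simulated "node" works only on the data it holds locally.
    return sum(chunk)

# Hypothetical data set, pre-partitioned into four per-node chunks.
data = list(range(1000))
chunks = [data[i::4] for i in range(4)]

# Process every chunk in parallel, then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(process_chunk, chunks))

total = sum(partial_results)
print(total)  # 499500
```

In a real Hadoop cluster the chunks would be HDFS blocks already stored on each node’s local disks, so no data movement is needed before processing begins.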
With MapReduce, you ‘map’ your input data to the type of output you desire using some function that can be applied independently to each record: for instance, substituting a space for every comma in the input data, or counting the number of occurrences of each word in a book. ‘Reducing’ then aggregates the mapped data into useful results, perhaps through functions such as addition.
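The classic word-count example can be sketched in a few lines of plain Python (this shows the map/reduce semantics only, not Hadoop’s Java API or its distributed execution):

```python
from collections import defaultdict

def map_phase(text):
    # Map: emit a (word, 1) pair for every word in the input.
    return [(word.lower(), 1) for word in text.split()]

def reduce_phase(pairs):
    # Reduce: aggregate the mapped pairs by summing counts per word.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

pairs = map_phase("the quick brown fox jumps over the lazy dog the end")
counts = reduce_phase(pairs)
print(counts["the"])  # 3
```

Because each map call is independent, Hadoop can run the map phase on thousands of nodes at once, shuffling the intermediate pairs to the reducers afterward.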
Much like Red Hat with Linux, there are now commercial releases of Hadoop, such as Cloudera’s, that provide tools to simplify Hadoop implementation as well as reliable technical support. Hadoop itself provides built-in fault tolerance by storing three replicas of each data block across the processing nodes, enabling a robust implementation ‘out of the box’. Whereas GPFS and Lustre have scaled across hundreds of servers, known Hadoop implementations have successfully scaled across tens of thousands of nodes.
So what does all this mean for HPC, scientific and engineering applications? Microway sees Hadoop as an excellent addition to the stack for data intensive scientific applications. This can include bioinformatics, physics and weather modeling applications. Hadoop can also accelerate science when the workloads include a series of queries of very large data sets. Additionally, when scaling science from the desktop up to larger workloads, Hadoop can provide an effective transition model.
A few examples of Microway Hadoop solutions include the NumberSmasher 1U, 2U and 4U servers. With one to four multi-core Xeon CPUs, 512GB of memory, and up to 120TB of storage, the NumberSmasher servers are flexible and cost-effective. Microway will build your cluster for you, whether it’s four nodes or a hundred.