Introduction to MapReduce with Hadoop on Linux
MapReduce with Hadoop
Hadoop is mostly a Java framework, but the magically awesome Streaming utility allows us to use programs written in other languages. A program need only follow certain conventions for standard input and output, which our map.py and reduce.py scripts already do.
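If you're jumping in here, the contract is simple: a Streaming mapper reads lines on standard input and writes tab-separated key/value pairs to standard output, and the reducer then receives those pairs sorted by key. A mapper in that shape might look like the following sketch (illustrative only, not necessarily line-for-line identical to map.py; it assumes each log line begins with a hostname followed by a colon, as in our log.txt):

    #!/usr/bin/env python
    # Sketch of a Streaming mapper: emit "hostname<TAB>1" for every
    # invalid-login line read from standard input.
    import sys

    for line in sys.stdin:
        if 'invalid user' in line:
            # The text before the first colon is the hostname, which
            # becomes the key Hadoop will group on.
            print('%s\t1' % line.split(':', 1)[0])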
You'll need Java 1.6.x or later (I used OpenJDK 7). The rest can and should be performed as a nonroot user.
Download the latest stable Hadoop tarball (see Resources). Don't use a distro-specific (.rpm or .deb) package. I'm assuming you downloaded hadoop-1.0.4.tar.gz. Unpack this and change into the hadoop-1.0.4 directory. The directory hadoop-1.0.4 and the files map.py, reduce.py and log.txt should be in /tmp. If not, adjust the paths in the examples below as necessary.
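If you downloaded the tarball to /tmp, that amounts to:

    cd /tmp
    tar xzf hadoop-1.0.4.tar.gz
    cd hadoop-1.0.4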
Run the job on Hadoop like so:
cd /tmp/hadoop-1.0.4
bin/hadoop jar \
  contrib/streaming/hadoop-streaming-1.0.4.jar \
  -mapper /tmp/map.py -reducer /tmp/reduce.py \
  -input /tmp/log.txt -output /tmp/output
Hadoop will log some stuff to the console. Look for the following:
...
INFO streaming.StreamJob:  map 0%  reduce 0%
INFO streaming.StreamJob:  map 100%  reduce 0%
INFO streaming.StreamJob:  map 100%  reduce 100%
INFO streaming.StreamJob: Output: /tmp/output
This means our job completed successfully. I see a file called /tmp/output/part-00000, which contains just what we expect: one line per hostname, paired with its count of invalid login attempts.
Now is a good time to pause, smile and reward yourself with a quad-shot grande iced caramel macchiato. You're a rockstar.
Figure 1. Here's what we did during the map and reduce steps. The transformations we performed allow us to run many mappers and reducers on as many machines as we want. Hadoop takes care of the gory details. It starts mappers and reducers, passes data between them and spits out the answer.
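By the way, you can dry-run the same pipeline on one machine with nothing but the shell; the sort in the middle stands in for Hadoop's shuffle-and-sort phase (this assumes map.py and reduce.py behave like the sketches in this article and are marked executable):

    cat /tmp/log.txt | /tmp/map.py | sort | /tmp/reduce.py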
If you've got everything working so far, try starting your own cluster too! Running Hadoop on a single physical machine with multiple Java virtual machines is called pseudo-distributed operation.
Pseudo-distributed operation requires some configuration. The user you're running Hadoop as must also be able to make passwordless SSH connections to localhost. Setting up SSH this way is beyond the scope of this article, but you'll find more information in the "Single Node Setup" tutorial mentioned in Resources. If you started with the 1.0.4 tarball release recommended above, the tutorial should work verbatim on any standard GNU/Linux distribution.
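For reference, the one-time SSH setup in that tutorial boils down to something like the following (a sketch: generate a passphrase-less key and authorize it for localhost):

    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    ssh localhost    # should log in without prompting for a password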
If you set up pseudo-distributed (or distributed) Hadoop, you'll gain the benefit of two spartan-but-useful Web interfaces. The NameNode Web interface (http://localhost:50070 in the default configuration) allows you to browse logs and browse the Hadoop distributed filesystem. The JobTracker Web interface (http://localhost:50030 by default) allows you to monitor MapReduce jobs and debug problems.
Figure 2. NameNode Web Interface
Figure 3. JobTracker Web Interface
Beautifully Simple Python MapReduce
You may wonder why reduce.py (above) is a convoluted mini-state machine. The reason is that Hadoop hands the reducer one long stream of key/value lines, sorted by key; the script itself has to notice when the hostname changes and emit the finished total before starting on the next host. That pattern looks roughly like this (a sketch in the spirit of reduce.py, not necessarily identical to it, assuming tab-separated hostname/count pairs as in the mapper sketch earlier):
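    #!/usr/bin/env python
    # Sketch of a Streaming reducer: sum counts per hostname from
    # key-sorted "hostname<TAB>count" lines on standard input.
    import sys

    last_host, total = None, 0
    for line in sys.stdin:
        host, count = line.rstrip('\n').split('\t', 1)
        if host != last_host:
            if last_host is not None:
                print('%s\t%d' % (last_host, total))
            last_host, total = host, 0
        total += int(count)
    if last_host is not None:
        print('%s\t%d' % (last_host, total))

The Dumbo Python library (see Resources) hides this bookkeeping detail of Hadoop, letting us focus even more tightly on our mapping and reducing.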
In Dumbo, our MapReduce implementation becomes:
def mapper(key, value):
    if 'invalid user' in value:
        yield value.split(':')[0], 1

def reducer(key, values):
    yield key, sum(values)

if __name__ == '__main__':
    import dumbo
    dumbo.run(mapper, reducer)
The state machine is gone. Dumbo takes care of grouping by key (hostname).
Save the above code in a file called /tmp/smart.py. Install Dumbo. See Resources for a link, and don't worry, it's easy. Once Dumbo is installed, run the code:
cd /tmp
dumbo start smart.py -hadoop hadoop-1.0.4 \
  -input log.txt -output totals \
  -outputformat text
Finally, examine the output:
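Dumbo provides a cat command for this. Since we told Dumbo where our Hadoop lives, something like the following should print the results (a sketch; adjust the path if your part file is named differently):

    dumbo cat totals/part-00000 -hadoop hadoop-1.0.4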
The content should match our earlier result from Hadoop Streaming.
Hadoop is great for one-time jobs and off-line batch processing, especially where the data is already in the Hadoop filesystem and will be read many times. My earlier examples make more sense under that assumption; perhaps the job must be run daily and must finish within a few minutes.
Consider some cases when Hadoop is the wrong tool. Small dataset? Don't bother. In a one-meter race between a rocket and a scooter, the scooter is gone before the rocket's engines are started. Transactional data storage for a Web site? Try MySQL or MongoDB instead.
Hadoop also won't help you process data as it arrives, often referred to as "real-time" or "streaming" processing (not to be confused with the Hadoop Streaming utility we used earlier). For that, consider Storm (see Resources for more information).
With practice, you'll quickly be able to discern when Hadoop is the right tool for the job.
Resources

See http://hadoop.apache.org/docs/current/single_node_setup.html for information on how to run a pseudo-distributed Hadoop cluster.
Check out Dumbo at http://projects.dumbotics.com/dumbo if you want to do more with MapReduce in Python. See https://github.com/klbostee/dumbo/wiki/Building-and-installing for install instructions and https://github.com/klbostee/dumbo/wiki/Short-tutorial for an excellent tutorial.
See https://github.com/nathanmarz/storm for information on Storm, a real-time distributed computing system.
To run Storm and Hadoop and manage both centrally, check out the Mesos project at http://www.mesosproject.org.
Adam Monsen is a seasoned software engineer and FLOSS zealot. He lives with his wonderful wife and children in Seattle, Washington. He blogs semi-regularly at adammonsen.com.