April 2013 Issue of Linux Journal: High Performance Computing
When I was in college, there was a rich kid down the hall who had a computer with 16MB of RAM. Before you scoff, you need to think back to 1993. The standard amount of RAM in a new computer was 2MB, with 4MB being "high-end". Anyway, this kid's computer was amazingly fast because he could create a RAM disk big enough to contain Windows 3.1 completely, so the entire OS ran from RAM. It was the 1993 rich-kid version of an SSD.
Back then, the most intense computation I ever did on a computer was image rendering with POV-Ray. We all assumed the rich kid would blow us out of the water with his awesome computer running completely in RAM—except that he didn't. Although his computer was indeed the most responsive computer I'd ever seen, it didn't have a math coprocessor. My friend with the 486DX2-66 computer with 4MB of RAM could render POV-Ray images faster than anything I'd ever seen. And, that's when I first understood high-performance computing. Granted, HPC has changed through the years, but the concept remains the same—heavy-duty hardware for heavy-duty number-crunching. And, this month, we focus on HPC in the Linux world.
Reuven M. Lerner starts out the issue with Web security. Firewalls and intrusion detection can't protect you from poor coding, so it's important to develop with a security mindset, and Reuven provides some great information to that end. Next, Dave Taylor reminds us of the other side of what's important with programming: having fun. Dave continues his series on Cribbage and shows how complicated it is to program things that seem simple for humans to do in their heads.
Kyle Rankin helps solve a problem that pops up when taking a laptop back and forth from work to home. Plugging in a different external monitor can be frustrating when all of your programs and windows don't line up the same. If you add a monitor with a different aspect ratio, the frustration can be even greater. Kyle shows how he handles the problem on his own laptop.
In my Open-Source Classroom column, I tackle the issue of overly long URLs. Although it's perfectly acceptable to use a free URL shortener, things like Google shutting down Google Reader remind us that if we're depending on a free service, we can't complain when it goes away. I demonstrate a handful of ways to shorten URLs from your own hosted domain.
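The core of any self-hosted shortener is just a lookup table mapping short paths to full URLs plus an HTTP redirect. Here's a minimal sketch using only the Python standard library—the short codes and destinations are made up for illustration and aren't from the column itself:

```python
# Minimal self-hosted URL shortener sketch (illustrative only).
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of short paths to destination URLs
SHORT_URLS = {
    "/lj": "https://www.linuxjournal.com/",
    "/hpc": "https://en.wikipedia.org/wiki/High-performance_computing",
}

def resolve(path):
    """Return the redirect target for a short path, or None if unknown."""
    return SHORT_URLS.get(path)

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = resolve(self.path)
        if target:
            self.send_response(301)               # permanent redirect
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "Unknown short URL")

# To run:  HTTPServer(("localhost", 8000), Redirector).serve_forever()
```

In practice you'd likely do the same thing with a couple of rewrite rules in Apache or nginx, but the principle is identical: you own the domain, so the links never die when someone else's free service does.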
And finally, let's get to the meat of this issue—namely, HPC stuff. Adam Monsen describes how to use MapReduce with Hadoop on Linux. Grep is an amazing tool, but there are times when you need to trade in the grep Swiss Army knife for a chainsaw. Adam shows how. If you're facing the issue of grepping enough data to turn to MapReduce, perhaps you also should look into data deduplication. Jeramiah Bowling follows Adam's article with a piece on Opendedup. If you find yourself dedicating the majority of your drive space to storing redundant data, you'll want to read Jeramiah's study.
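To see why MapReduce scales where a single grep can't, it helps to look at the programming model in miniature. This toy word count in plain Python shows the map and reduce phases—Adam's article, of course, uses the real Hadoop implementation distributed across machines:

```python
# Toy MapReduce word count -- a sketch of the programming model only.
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in one line of input
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by word, then sum each group's counts
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

lines = ["grep is great", "MapReduce is great for big data"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(counts["great"])   # -> 2
```

The win is that map calls are independent, so Hadoop can spread them across a cluster and only gather results at the reduce step—which is exactly the chainsaw you reach for when one machine's grep can't chew through the data.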
When you think of high-performance computing, it's unlikely that "Python" is the first language to pop into your head. Joey Bernard explores IPython and SciPy this month, which support parallel processing and bring HPC functionality to Python code. When it comes to high-performance computing, terms like FIFO buffers, ring buffers and work queues are at least as important as the code being crunched. Alexander Krizhanovsky delves deep into the methods of scaling multicore environments to get the most out of your server farm.
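The basic idea behind parallel Python is farming independent chunks of work out to multiple processes. Here's a minimal standard-library sketch—multiprocessing stands in here for the IPython/SciPy parallel machinery Joey actually covers:

```python
# Parallel map sketch using only the standard library (illustrative;
# not the IPython.parallel API from the article).
from multiprocessing import Pool

def heavy(n):
    # Stand-in for a CPU-bound kernel, e.g. one step of a simulation
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(4) as pool:                 # four worker processes
        results = pool.map(heavy, [10, 100, 1000])
    print(results)
```

Because each call to heavy() is independent, the pool can run them on separate cores—the same divide-and-conquer shape that the heavier HPC frameworks apply across whole clusters.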
David Strauss rounds out the HPC content with an interesting take on server containers replacing the concept of VMs. Although very much related to the concepts of cloud computing (SaaS, PaaS, IaaS), server containers might be the next logical step in computing solutions. Check out David's article and see what you think.
The HPC issue always makes my own server farm seem insignificant. Granted, most people don't have an entire server rack in their basements like I do, but still, I'm certainly not doing any high-performance computing down there. Computers are, however, becoming more and more powerful while at the same time shrinking physically. It wasn't so long ago that a math coprocessor was the miracle of modern computing. Who knows what tomorrow will bring? Thankfully, with Linux, we'll likely get to experience the newest and best technology early on, and for free. That's what happens when you run a high-performance operating system.
Available to Subscribers: April 1