Mainstream Parallel Programming
I hope that this article has shown that parallel programming is for everybody. I've listed a few examples of what I believe to be good areas to apply the concepts presented here, but I'm sure there are many others.
There are a few keys to ensuring that you successfully parallelize whatever task you are working on. The first is to keep network communication between computers to a minimum. Sending data between nodes generally takes a large amount of time compared with the computation that happens on a single node. In the example above, there was no communication between nodes, although some tasks may require it. Second, if you are reading your data from disk, have each node read only what it needs. This keeps memory usage to a bare minimum. Finally, be careful when performing tasks that require the nodes to be synchronized with each other, because the processes are not synchronized by default; some machines will run slightly faster than others. In the sidebar, I have included some common MPI subroutines that you can utilize, including one that will help you synchronize the nodes.
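To make the second and third keys concrete, here is a minimal sketch, assuming a simple workload of N independent items; the slicing scheme and the stand-in work are illustrative, not the example from the article:

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000                    /* total work items (illustrative) */

    int main(int argc, char **argv)
    {
        int myRank, clstSize;
        long localSum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &clstSize);
        MPI_Comm_rank(MPI_COMM_WORLD, &myRank);

        /* Each node claims only its own slice of the work, so no node
           reads or holds more data than it needs.  The last node picks
           up the remainder when N does not divide evenly. */
        int perNode = N / clstSize;
        int first   = myRank * perNode;
        int last    = (myRank == clstSize - 1) ? N : first + perNode;

        for (int i = first; i < last; i++)
            localSum += i;            /* stand-in for the real work */

        /* Nodes run at slightly different speeds; wait here before any
           step that depends on every node's results. */
        MPI_Barrier(MPI_COMM_WORLD);

        printf("Node %d of %d handled items %d-%d (sum %ld).\n",
               myRank, clstSize, first, last - 1, localSum);

        MPI_Finalize();
        return 0;
    }

On most MPI installations, a program like this compiles with mpicc and runs with something like mpirun -np 4 ./a.out.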
In the future, I expect computer clusters to play an even more important role in our everyday lives. I hope that this article has convinced you that it is quite easy to develop applications for these machines, and that the performance gains they offer are substantial enough to justify using them for a variety of tasks. I would also like to thank Dr. Mohamed Laradji of the University of Memphis Department of Physics for allowing me to run these applications on his group's Beowulf cluster.
Some Essential and Useful MPI Subroutines
There are more than 200 subroutines that are a part of MPI, and they are all useful for some purpose. There are so many calls because MPI runs on a variety of systems and fills a variety of needs. Here are the calls most useful for the sort of algorithm demonstrated in this article; a short skeleton after the list shows them working together:
MPI_Init((void*) 0, (void*) 0) — initializes MPI.
MPI_Comm_size(MPI_COMM_WORLD, &clstSize) — returns the size of the cluster in clstSize (integer).
MPI_Comm_rank(MPI_COMM_WORLD, &myRank) — returns the rank of the node in myRank (integer).
MPI_Barrier(MPI_COMM_WORLD) — pauses until all nodes in the cluster reach this point in the code.
MPI_Wtime() — returns the wall-clock time, in seconds, elapsed since some arbitrary point in the past. Useful for timing subroutines and other sections of code.
MPI_Finalize() — cleans up all MPI state and shuts down the MPI environment. Call this before your process terminates. Note that, unlike the calls above, it takes no arguments.
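To see how these calls fit together, here is a minimal skeleton that initializes MPI, reports each node's rank, and times a section of code; the work being timed is left as a placeholder:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int myRank, clstSize;
        double start, elapsed;

        MPI_Init(&argc, &argv);                    /* initialize MPI   */
        MPI_Comm_size(MPI_COMM_WORLD, &clstSize);  /* how many nodes?  */
        MPI_Comm_rank(MPI_COMM_WORLD, &myRank);    /* which one am I?  */

        MPI_Barrier(MPI_COMM_WORLD);               /* line everyone up */
        start = MPI_Wtime();

        /* ... the work you want to time goes here ... */

        elapsed = MPI_Wtime() - start;
        printf("Node %d of %d finished in %f seconds.\n",
               myRank, clstSize, elapsed);

        MPI_Finalize();                            /* no arguments     */
        return 0;
    }

The barrier before the first MPI_Wtime() call lines the nodes up so that they all start the clock at roughly the same moment.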
Resources for this article: /article/9135.
Michael-Jon Ainsley Hore is a student and all-around nice guy at The University of Memphis, working toward an MS in Physics with a concentration in Computational Physics. He will graduate in May 2007 and plans to start work on his PhD immediately after.