Use Python for Scientific Computing
This will launch five copies of the Python interpreter, each running the test script.
Now, we can write our program to divide our large data set among the available processors if that is our bottleneck. Or, if we want to do a large simulation, we can divide the simulation space among all the available processors. Unfortunately, a useful discussion of MPI programming would be another article or two on its own. But, I encourage you to get a good textbook on MPI and do some experimenting yourself.
Although any interpreted language will have a hard time matching the speed of a compiled, optimized language, we have seen that this is not as big a deterrent as it once was. Modern machines run fast enough to more than make up for the overhead of interpretation. This opens up the world of complex applications to using languages like Python.
This article has been able to introduce only the most basic available features. Fortunately, many very good tutorials have been written and are available from the main SciPy site. So, go out and do more science, the Python way.
numpy and scipy are not the only options available to Python programmers. Another popular package is ScientificPython. It includes geometric types (such as vectors, tensors and quaternions), polynomials, basic statistics, derivatives, interpolation and more. This is the same type of functionality available in scipy. The major difference is that ScientificPython has the ability to do parallel programming built in, whereas scipy requires an extra module. This is done with a partial implementation of MPI and an implementation of the Bulk Synchronous Parallel library (BSPlib).
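For comparison, that same kind of functionality — basic statistics, polynomials, derivatives and interpolation — looks roughly like this in numpy. The sample values are made up for illustration:

```python
import numpy as np

samples = np.array([1.0, 2.0, 2.5, 3.5, 4.0])
mean, std = samples.mean(), samples.std()           # basic statistics

coeffs = np.polyfit([0, 1, 2], [1, 3, 7], 2)        # fit a quadratic polynomial
at_three = np.polyval(coeffs, 3)                    # evaluate it at x = 3

x = np.linspace(0, np.pi, 100)
dydx = np.gradient(np.sin(x), x)                    # numerical derivative (~cos x)

halfway = np.interp(0.5, [0.0, 1.0], [10.0, 20.0])  # linear interpolation
```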
LAPACK and BLAS
The argument can be made that comparing the complexity of C and FORTRAN to that of Python is unfair, because we actually are using add-on packages in Python. Equivalent libraries can be used in C and FORTRAN, with LAPACK and BLAS being some of the more popular. BLAS provides basic linear algebra functions, while LAPACK builds on these to provide more complex scientific functions. Although these libraries provide optimized routines that will extract every useful cycle from your hardware and are much simpler to write than straight C or FORTRAN, they still are orders of magnitude more complex than the equivalent in Python. If you really do need to squeeze out every last tick from your machine, however, nothing will beat these types of libraries.
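In fact, numpy already sits on top of these libraries: numpy.linalg.solve hands the work to LAPACK's gesv routines, so a linear solve that would take dozens of lines of FORTRAN boilerplate reduces to a few lines of Python:

```python
import numpy as np

# Solve A x = b; numpy dispatches this to LAPACK under the hood.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)   # -> [2. 3.]
```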
Types of Parallel Programming
Parallel programs can, in general, be broken down into two broad categories: shared memory and message passing. In shared-memory parallel programming, the code runs on one physical machine and uses multiple processors. Examples of this type of parallel programming include POSIX threads and OpenMP. This type of parallel code is restricted to the size of the machine that you can build.
To bypass this restriction, you can use message-passing parallel code. In this form, independent execution units communicate by passing messages back and forth. This means they can be on separate machines, as long as they have some means of communication. Examples of this type of parallel programming include MPICH and OpenMPI. Most scientific applications use message passing to achieve parallelism.
Python Programming Language—Official Web Site: www.python.org
ScientificPython—Theoretical Biophysics, Molecular Simulation, and Numerically Intensive Computation: dirac.cnrs-orleans.fr/plone/software/scientificpython
Joey Bernard has a background in both physics and computer science. Finally, his latest job with ACEnet has given him the opportunity to use both degrees at the same time, helping researchers do HPC work.