Use Python for Scientific Computing
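To start, we need a matrix to work with. With numpy, a minimal sketch looks like this (the array name a and the complex dtype are assumptions here):

import numpy

# a 500x500 matrix of complex zeros
a = numpy.zeros((500, 500), dtype=complex)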
which would give us a 500x500 element matrix initialized with zeros. We access the real and imaginary parts of each element using:
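A sketch of that access, assuming the complex array a created above:

# set the real and imaginary parts of element [0,0] separately
a.real[0, 0] = 1.0
a.imag[0, 0] = 2.0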
which would set the value 1+2j into the [0,0] element.
There are also functions for more complicated operations, including dot products, inner products, outer products, inverses, transposes, traces and so forth. Needless to say, we already have a great many tools at our disposal for doing a fair amount of science. But is that enough? Of course not.
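As a quick sketch of a few of these (the small matrices below are invented purely for illustration):

import numpy
from numpy import linalg

x = numpy.array([[1.0, 2.0], [3.0, 4.0]])
y = numpy.array([[5.0, 6.0], [7.0, 8.0]])

numpy.dot(x, y)      # matrix (dot) product
numpy.inner(x, y)    # inner product over the last axes
numpy.outer(x, y)    # outer product of the flattened arrays
linalg.inv(x)        # matrix inverse
x.T                  # transpose
numpy.trace(x)       # trace (sum of the diagonal elements)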
Now that we can do some math, how do we get some “real” science done? This is where we start using the features of our second package of interest, scipy. With this package, we have quite a few more functions available to do some fairly sophisticated computational science. Let's look at an example of simple data analysis to show what kind of work is possible.
Let's assume you've collected some data and want to see what form this data has, whether there is any periodicity. The following code lets us do that:
import scipy
import scipy.io

inFile = file('input.txt', 'r')          # open the raw data file
inArray = scipy.io.read_array(inFile)    # read it in as an array
outArray = scipy.fft(inArray)            # transform to the frequency domain
outFile = file('output.txt', 'w')
scipy.io.write_array(outFile, outArray)  # write the transformed data back out
As you can see, reading in the data is a one-liner. In this example, we use the FFT functions to convert the signal to the frequency domain. This lets us see the spread of frequencies in the data. The equivalent C or FORTRAN code is simply too large to include here.
But, what if we want to look at this data to see whether there is anything interesting? Luckily, there is another package, called matplotlib, which can be used to generate graphics for this very purpose. If we generate a sine wave and pass it through an FFT, we can see what form this data has by graphing it (Figures 1 and 2).
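A rough sketch of how such figures can be produced (the sample rate and the 5Hz frequency here are arbitrary choices, not values taken from the figures):

import numpy
from matplotlib import pyplot

# one second of a 5Hz sine wave sampled at 500Hz (arbitrary values)
t = numpy.linspace(0.0, 1.0, 500, endpoint=False)
signal = numpy.sin(2 * numpy.pi * 5 * t)

# transform the signal to the frequency domain
spectrum = numpy.abs(numpy.fft.fft(signal))

pyplot.figure()
pyplot.plot(t, signal)     # the raw sine wave (compare Figure 1)
pyplot.figure()
pyplot.plot(spectrum)      # its FFT, a single sharp peak (compare Figure 2)
pyplot.show()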
We see that the sine wave looks regular, and the FFT confirms this by having a single peak at the frequency of the sine wave. We just did some basic data analysis.
This shows us how easy it is to do fairly sophisticated scientific programming. And, if we use an interactive Python environment, we can do this kind of scientific analysis in an exploratory way, allowing us to experiment on our data in near real time.
Luckily for us, the people at the SciPy Project have thought of this and have given us the program ipython, which is also available at the main SciPy site. ipython is written to work seamlessly with scipy, numpy and matplotlib. To execute it with matplotlib support, type:
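The exact spelling depends on your IPython version, but a typical invocation is:

ipython --pylab

(older releases spell the option -pylab, with a single dash).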
The interface is a simple ASCII one, as shown in Figure 3.
If we use it to plot the sine wave from above, it simply pops up a display window to draw in the plot (Figure 4).
The plot window allows you to save your brilliant graphs and plots, so you can show the entire world your scientific breakthrough. All of the plots for this article actually were generated this way.
So, we've started to do some real computational science and some basic data analysis. What do we do next? Why, we go bigger, of course.
So far, we have looked at relatively small data sets and relatively straightforward computations. But, what if we have really large amounts of data, or we have a much more complex analysis we would like to run? We can take advantage of parallelism and run our code on a high-performance computing cluster.
There is another module, called mpi4py, that provides Python bindings for the MPI standard. With it, we can write message-passing programs in Python. It does require some work to install, however.
The first step is to install an MPI implementation for your machine (such as MPICH, OpenMPI or LAM). Most distributions have packages for MPI, so that's the easiest way to install it. Then, you can build and install mpi4py the usual way with the following:
python setup.py build
python setup.py install
To test it, execute:
mpirun -np 5 python tests/helloworld.py
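The bundled test isn't reproduced here, but a minimal message-passing script of that sort might look something like the following sketch (not the actual tests/helloworld.py):

from mpi4py import MPI

comm = MPI.COMM_WORLD     # the communicator containing every process mpirun started
rank = comm.Get_rank()    # this process's id within the communicator
size = comm.Get_size()    # total number of processes

print("Hello, world! I am process %d of %d." % (rank, size))

Run with mpirun -np 5, each of the five processes prints its own rank.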
Joey Bernard has a background in both physics and computer science. This serves him well in his day job as a computational research consultant at the University of New Brunswick. He also teaches computational physics and parallel programming.