Porting Scientific and Engineering Programs to Linux
The high cost of the hardware and compilers needed to assemble a fully functional engineering workstation puts one out of reach of most students and professionals who want a workstation at home. Even large companies want to cut costs without losing efficiency. With the advances in Intel processor speed and the ever-growing software base available for Linux, the combination presents a good solution for the low-cost workstation.
For engineers and scientists, a full-featured FORTRAN 77 compiler is a must, along with a readily accessible library of numerical functions and subroutines. Full-featured commercial FORTRAN 77 compilers, such as Lahey's for DOS and Sun's for SPARCstations, cost around eight hundred US dollars regardless of platform. Commercial numerical libraries, such as IMSL, can cost thousands of dollars.
Readily available under Linux are a couple of free FORTRAN 77 options. The first is f2c, a FORTRAN-to-C source code converter. The other is g77, a FORTRAN compiler produced by the GNU project. The limitations of f2c leave g77 as the only viable option for compiling large, complex FORTRAN 77 applications under Linux. Short pieces of FORTRAN code, written as needed for particular engineering tasks, were compiled with g77 to test the compiler's fitness for day-to-day engineering work. g77 produced efficient binaries and behaved as a good FORTRAN compiler should.
The next step was to compile a more significant piece of engineering code, one commonly run on workstations in the ten- to twenty-thousand-dollar range. As radiation safety professionals, we frequently use MCNP, a code that implements the “Monte Carlo” method to transport neutral-particle radiation. (The current version of MCNP is 4A; 4B will be released soon.) MCNP represents hundreds of man-years of coding and is considered the best available technology for this type of calculation. Like most science and engineering code packages with long development histories, MCNP is written predominantly in FORTRAN 77, and language conversion is not considered a reasonable development step.
One thing that makes porting this code to a new platform somewhat challenging is that it is a safety-related, pedigreed code, so one cannot simply start changing things. Any changes must be made using patch files and the provided implementation utility, called PRPR, which is also written in FORTRAN 77. PRPR is a relatively simple code. The first step in porting MCNP to Linux was to compile PRPR, without modification, using g77. This operation was successful, an encouraging but misleading result. The rest of the package was not as cooperative.
MCNP had already been ported to numerous platforms, everything from VMS to UNICOS. The first obvious approach was to try the patch files provided for similar platforms. None were successful, but these attempts did result in numerous educational error messages.
An initial hurdle to overcome was the absence of the fsplit command in Slackware distributions. This problem was resolved by getting fsplit from a BSD distribution.
The second hurdle was the absence of some VAX FORTRAN extensions, specifically those related to signal handling, date and time calls and execution timing. Although not part of the FORTRAN 77 standard, these extensions are included with many commercial FORTRAN compilers. Most of them are simple enough to replicate with short C routines that are linked with the FORTRAN objects when the executable is produced. The MCNP source already includes a C routine to handle execution timing.
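As an illustration, the missing date and timing extensions can be approximated with C routines along the following lines. The trailing underscores follow the g77/f2c external-naming convention; the routine names and the two-digit-year behavior here are a sketch of the VAX-style calls, not MCNP's actual replacement code:

```c
#include <time.h>

/* Sketch of a VAX-style CALL IDATE(MONTH, DAY, YEAR) replacement.
   g77 passes every argument by reference and appends an underscore
   to external names, so this is callable directly from FORTRAN 77. */
void idate_(int *month, int *day, int *year)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    *month = t->tm_mon + 1;       /* tm_mon counts from 0 */
    *day   = t->tm_mday;
    *year  = t->tm_year % 100;    /* VAX IDATE returns a two-digit year */
}

/* Sketch of T = SECNDS(X): CPU seconds used so far, minus an offset. */
float secnds_(float *offset)
{
    return (float)clock() / CLOCKS_PER_SEC - *offset;
}
```

Linking is then just a matter of naming the C object on the g77 command line along with the FORTRAN objects.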
Once the VAX FORTRAN extension problems had been solved, the remaining tens of error messages all involved integrating the FORTRAN and C objects. Because the MCNP plotting routines are written in ANSI C, this integration is necessary to produce a fully functional executable. The errors stem from the syntax of subroutine declarations, which varies from one FORTRAN/C compiler pair to another.
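The mismatch comes down to calling conventions. Under g77's default (f2c-compatible) scheme, every FORTRAN argument is passed by reference and external names gain a trailing underscore, so each C routine meant to be CALLed from FORTRAN must be declared accordingly. The routine below is purely hypothetical, but it shows the shape every FORTRAN/C interface point must take:

```c
/* Hypothetical example of the g77/f2c interface convention: a FORTRAN
   statement such as
       S = VSUM(X, N)
   resolves to the C function below.  Note the trailing underscore on
   the name and the pointer arguments, since FORTRAN 77 passes all
   arguments by reference. */
float vsum_(float *x, int *n)
{
    float s = 0.0f;
    for (int i = 0; i < *n; i++)
        s += x[i];
    return s;
}
```

Compilers that mangle names differently (no underscore, upper-case names, or hidden string-length arguments) are exactly why these declarations vary from one FORTRAN/C compiler pair to another.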
Once the executable had been produced successfully, that is, the compile completed with no errors, the provided test problems were run. Test problems are supplied with safety-related codes so that users can verify not only that they compiled a binary, but also that the binary produces reasonable answers. A comparison of the test-problem output from the Linux binary with the supplied output revealed only differences attributable to architecture. The machine that produced the standard output was a SPARCstation 5; the first Linux box to run the MCNP binary was a 75 MHz Pentium.
Since that first platform, MCNP Version 4A has been compiled and run on 100 and 200 MHz Pentiums. While the Pentiums are not quite as fast as a SPARCstation 20/151 (150 MHz HyperSPARC), they are quite fast considering their cost relative to the SPARCstation; the referenced HyperSPARC module alone costs over four thousand dollars.
The patch files, implemented with PRPR, necessary to compile MCNP Version 4A under Linux are shown in Listings 1 and 2. Note that both the FORTRAN patch and the C patch are necessary. Also note that MCNP's graphics behave best under the OpenLook window manager.
While we do not expect that all scientists and engineers use MCNP, we do hope that this documentation of the MCNP port will be helpful to those looking for an inexpensive workstation to run their code.