SISAL: A Safe and Efficient Language for Numerical Calculations
The natural way to do input and output in a functional language is to funnel input through the argument list of the top-level function and funnel output through function return values. The SISAL developers took this conservative approach, designing a special input-output system called FIBRE. FIBRE represents primitive types, arrays, records and so on in a special ASCII form that carries not only the data itself but also descriptive information about it, such as array sizes and record contents.
Though it has a certain elegance, this format falls severely short if, for instance, you are trying to decipher the output of a global weather prediction model.
It is here that the SISAL interface to C and Fortran comes to the rescue. This interface can be used to perform any kind of input and output the user desires. However, the solution is somewhat inelegant, as it requires the user to delve into the grungy details of the inter-language interface. This can be a daunting experience for the average scientist or engineer, and it defeats the purpose of providing safe, efficient and easy-to-use computational tools.
This problem can be solved if flexible interfaces can be written between SISAL and widely used standard data formats. I have written just such an interface to the NetCDF library. NetCDF is a format and an application programming interface for storing and retrieving gridded numerical data. Developed by the UNIDATA program of the University Corporation for Atmospheric Research, it is rapidly increasing in popularity in the fields of meteorology and oceanography and is starting to spill over into other fields as well. Many tools currently exist to manipulate and display data in this format, with more appearing all the time. The NetCDF library is open-source software and is available from UNIDATA's web site. However, if you are lucky, your Linux distribution already has a prepackaged version of NetCDF available for installation—for instance, it is available in the Debian GNU/Linux distribution that I use.
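To give a feel for what any SISAL binding must ultimately drive, here is a minimal sketch of writing a one-dimensional variable with the standard NetCDF C library, the layer the interface wraps. The file name, variable name and data values are invented for illustration, and error checking is omitted; a real program should test each function's return code. It must be linked against the NetCDF library (typically with -lnetcdf).

```c
#include <netcdf.h>

int main(void) {
    int ncid, dimid, varid;
    double temp[4] = {280.0, 281.5, 283.0, 284.5};

    /* Create the file, then define one dimension and one variable
       along it.  NetCDF records this metadata in the file itself. */
    nc_create("rod.nc", NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "x", 4, &dimid);
    nc_def_var(ncid, "temperature", NC_DOUBLE, 1, &dimid, &varid);
    nc_enddef(ncid);

    /* Write the data and close; rod.nc is now a self-describing
       file that any NetCDF-aware tool can read and plot. */
    nc_put_var_double(ncid, varid, temp);
    nc_close(ncid);
    return 0;
}
```

The self-describing nature of the result is the point: unlike raw FIBRE output, the dimension and variable names travel with the numbers.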
Listing 1 shows a complete SISAL application (solution of the heat transport equation in a long rod). NetCDF is used to write out the results. The time evolution of the temperature pattern along the rod is shown in Figure 1, plotted with Dan Kelley's GRI graphics package. (See the July 2000 issue of Linux Journal for an article on GRI.)
Around 1990 the SISAL folks at LLNL decided that SISAL was ready to be smoke-tested outside of the confines of Livermore and offered people time on a Livermore Cray in exchange for test-driving SISAL. I took them up on their offer and quickly became enthralled with the language. At the time I was working on Sun workstations, but eventually began a conversion to Linux on PCs. SISAL wouldn't compile on Linux at the time, but the problems turned out to be relatively trivial, so with the help of Pat Miller at LLNL, we ported SISAL to Linux.
As a real smoke test of SISAL, I wrote a tropical weather research model that I still use, and that currently consists of 4,600 lines of SISAL code. I consider the osc compiler to be relatively robust, though undoubtedly there are still bugs that I haven't exercised.
The current version of my model talks to the outside world using a C interface to my Candis (C language analysis and display) package. Developing this interface contributed to the ease with which I was able to implement the NetCDF interface to SISAL.
Unfortunately, the SISAL project was canceled in 1997. However, with James McGraw's help, an open-source license was placed on the osc compiler source code. As one of the few current SISAL enthusiasts, I now maintain the web site for SISAL.
The SISAL project represents a relatively rare event in recent times, namely a significant effort to focus the talents of computer scientists on an issue of real importance to physical scientists and engineers. The result is an exceptionally interesting computer language for high-performance numerical computing, along with the hooks needed to make it truly useful to the targeted clientele.
Though the SISAL project ultimately failed to make a significant impact at LLNL, its transfer to the world of open source gives it another chance. What is needed are people willing to try it out and ascertain whether it will help them get their work done more effectively than the current alternatives. Also needed are knowledgeable people who are willing to fix problems and build on what has already been accomplished. An obvious extension would be to make SISAL use Linux SMP for parallel computation. A more ambitious objective would be to extend it to the distributed memory model of parallel computing, perhaps enabling it to make use of the Message Passing Interface (MPI) commonly used on massively parallel computers such as Linux Beowulf systems.