SISAL: A Safe and Efficient Language for Numerical Calculations
SISAL turns out to be a very safe language to use. Compared to C, I invariably find that I am much farther along the road to a working program with SISAL when the code first compiles successfully. SISAL has many conventional safety features, such as strict type checking, prototyping of external functions and array bounds checking. Arrays carry information with them on array length and starting value of the array index. Multidimensional arrays are actually arrays of arrays, so bounds checking works individually for each index as well as for the array as a whole. Since bounds checking slows program execution, it can be turned off when debugging is finished.
SISAL derives much additional safety from its functional and single assignment natures. As I showed above, undefined alternatives in if statements are impossible in SISAL. I find that this has the effect of forcing one to think through an algorithm very thoroughly as one is coding the program. Thus, coding a SISAL program often takes more time than coding the first round of the equivalent C program, but the extra effort pays off a hundred-fold in the debugging stage and in a better understanding of the calculation.
Speaking of debugging, conventional debuggers such as gdb don't work well with SISAL, so the developers of the language provide their own debugger, sdbx. In addition, the “peek” function allows one to examine the value of variables during program execution.
In many ways SISAL is easier to debug than conventional languages, due to its functional and single assignment natures. However, certain behaviors take a bit of getting used to. For instance, in the function:
    function polar(x, y: real; returns real)
      let
        r := sqrt(x*x + y*y);
        theta := atan(y/x);
      in
        r
      end let
    end function
all traces of the variable theta disappear. Since the computation of this variable contributes nothing to the final answer, r, the statement generating theta is dead code and is removed by the SISAL optimizer. This gives rise to an iron-clad rule in SISAL: if a calculation doesn't contribute to the final result, it is ruthlessly eliminated. If the value of theta is really needed in the above function, then it should be included in the output by changing the function definition to ...returns real, real) and the r in the in clause to r, theta. Otherwise the computation of theta should be deleted from the code.
If this were all there were to SISAL, it would be an elegant, but useless language. Since new variables are created for every assignment, any significant SISAL program would be one giant memory leak. This is particularly significant when it comes to arrays. In SISAL, a completely new array is apparently created each time an element of an array is changed!
I say “apparently” because the back end of LLNL's SISAL compiler, osc, is quite good at optimizing away the horrid inefficiencies of single assignment semantics. This, in fact, is the major contribution of the SISAL development team. osc first converts the SISAL code into an intermediate form called “IF1”. This intermediate code is then extensively massaged before being converted into either C or Fortran code. Thus, SISAL is really a fancy C (or Fortran) preprocessor, which means that it is quite portable, in nonparallelizing form, to any architecture with a good C compiler, such as gcc. Virtually any UNIX or Linux system will compile SISAL code in single processor mode. The developers of osc claim that optimized SISAL code typically executes within 20% of the time required by the same algorithm coded directly in C or Fortran, and my experience with the language supports this claim. Interestingly, a fast Fourier transform routine written in SISAL by John Feo of LLNL actually ran faster in parallel mode on a Cray computer than Cray's own parallel fast Fourier transform routine, even though it was slightly slower in single processor mode.
Creating parallel SISAL code is somewhat more difficult. SISAL needs a C or Fortran compiler and an underlying operating system that implements parallelism in order to produce parallel code. Given the wide variety of parallel system types, and the variety of custom interfaces to these systems, porting SISAL to a new parallel system is not trivial. Thus, even though Linux now has symmetric multiprocessing available, I have not attempted to make SISAL use it.
Scientists and engineers often write computer programs that take advantage of libraries of useful code. These libraries are almost always written in Fortran. The importance of such libraries is often cited as a reason for not migrating away from Fortran.
SISAL bows to this reality by providing interfaces to both Fortran and C. The connection goes both ways; SISAL can call C and Fortran routines, and C and Fortran can call SISAL routines. The Fortran interface is more developed than the C interface, which is somewhat dismaying to Fortran-averse folk like me, but the good news is that C can easily be used to emulate Fortran code, thus allowing C programs to take advantage of the Fortran interface.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high-availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide.