64-Bit JMP for Linux
The world's largest privately owned software company, SAS, was cofounded in 1976 by Dr James Goodnight and John Sall. They continue to run the company as CEO and Executive Vice President. Sall is also chief architect of SAS's statistical discovery software called JMP (pronounced “jump”), which he invented for the Macintosh in the late 1980s. It is a desktop statistical analysis program using exploratory graphics to promote statistical discovery. JMP was released for Windows in 1995 and has been available for 32-bit Linux since 2003.
SAS's release of JMP version 6.1 later in 2006 harnesses the vast computational power of 64-bit Linux, which is not only exciting news for JMP and Linux, but also a milestone in statistical computing.
To understand the importance of a 64-bit version of JMP, let us contemplate the purpose and history of statistical analysis.
Ultimately, the purpose of statistics is to make sense out of too much information. For example, the only possible way to digest the United States census data collected every ten years, with its dozens of measurements on 275 million people, is by reducing it to statistical conclusions, such as the average household income by county and the median age by city or neighborhood. Nobody could possibly look at the raw census data and draw a meaningful conclusion beyond “the United States has a large, diverse population”.
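That kind of reduction, collapsing many raw records into a per-group average, can be sketched with standard UNIX tools. The county names and incomes below are made-up illustrative data, not census figures:

```shell
# Hypothetical census-style reduction: per-county average income.
# The county names and incomes are made-up illustrative data.
printf '%s\n' 'Wake 52000' 'Wake 48000' 'Durham 41000' |
awk '{sum[$1] += $2; n[$1]++}
     END {for (c in sum) printf "%s %.0f\n", c, sum[c] / n[c]}' |
sort
# prints:
# Durham 41000
# Wake 50000
```

The `sort` at the end only makes the output order deterministic; awk's `for (c in sum)` iterates its array in an unspecified order.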
The problem is that there are hundreds of statistical measures—in fact, SAS has already spent 30 years extending and refining its analytical capabilities and doesn't see any end in sight. Learning which techniques to use for which real-world situations can take years, and developing the insight to proceed effectively from raw data to knowledge can take a lifetime. This is what led John Sall to develop JMP in the late 1980s. Inspired by the way the Macintosh made desktop computing accessible to a whole new audience by introducing a graphical user interface, Sall realized he could make statistics accessible to a wider audience by making the analysis process visual.
Comprehending the meaning buried in pages of statistical test results—p-values, standard deviations, error terms, degrees of freedom and on and on—is a mind-boggling task even for experts, but Sall knew that just about anyone could look at a well-drawn graph and understand things about his or her data. JMP always leads every analysis with graphs, so that researchers needn't waste time poring over statistics when those graphs make it intuitively obvious whether they are on the right analysis path or not. JMP also groups related analyses together and presents them in the order a researcher would need them in the course of a sound data exploration process. Researchers do not have to rack their brains to remember which procedure might be helpful next. Instead, JMP provides the tools that are appropriate at each stage. Further, all of JMP's graphs and data tables are dynamically linked, so that users can point and click to select points in a graph or bars in a histogram and instantly see where those points are represented in all other open graphs and data tables.
Setting aside for a moment what it takes to understand statistics, consider what it takes to calculate statistics. Computing a standard deviation on thousands of observations with only pencil and paper could take a researcher weeks or months.
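For a sense of how trivial that same calculation is on a modern machine, a one-line awk program computes a mean and population standard deviation in a blink. A minimal sketch, using the classic eight-number example data set rather than any real measurements:

```shell
# Mean and population standard deviation of one number per line.
# Data: the textbook example set 2 4 4 4 5 5 7 9 (mean 5, sd 2).
printf '%s\n' 2 4 4 4 5 5 7 9 |
awk '{s += $1; ss += $1 * $1; n++}
     END {m = s / n; printf "mean=%g sd=%g\n", m, sqrt(ss / n - m * m)}'
# prints: mean=5 sd=2
```

The END block uses the identity sd² = E[x²] − (E[x])², so the data only has to be scanned once.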
When he created SAS in the early 1970s, Jim Goodnight's idea was to store all that data in a file and then write procedures that could be used and reused to compute statistics on any file. It's an idea that seems ludicrously simple today, but it was revolutionary at the time. The agricultural scientists using SAS could perform calculations over and over again on new data without having to pay for computer scientists to write and rewrite programs. Instead of taking weeks, these computations took hours. Fast-forward 30 years, and modern statistical software can do these calculations on hundreds of thousands of rows, instantaneously.
When it took months to compute simple descriptive stats, researchers often didn't get much further before they'd burned through their grant money. Now that the basics take seconds, researchers can dig much deeper, and thus the science and practice of statistics have evolved along with computing power.
For the last decade, desktop computing has been built on operating systems such as Windows, Linux and Mac that rely on 32-bit memory addressing. Accordingly, desktop applications have operated within the computational limits implied by this architecture. In practical terms, this meant statistical programs like JMP that load the entire dataset into RAM before performing any computations were limited to about a million rows of data. They couldn't handle the large-scale problems confronting researchers today. Geneticists are probing 3 billion base pairs of DNA. Semiconductor manufacturers are squeezing millions of transistors onto ever-tinier chips. Pharmaceutical companies comb through thousands of potentially therapeutic properties on countless known and theoretical compounds.
Dr Richard C. Potter, Director of Research and Development for JMP Product Engineering, was responsible for porting JMP from Macintosh to Windows and later from Windows to Linux, in collaboration with Paul Nelson, lead Linux System Developer. Potter says:
“JMP's 64-bit Linux release lifts this limit dramatically. Now JMP can move beyond the confines of the 32-bit addressing memory limit to a theoretical limit of 16 exabytes, which would allow JMP to work on two billion rows of data. The 64-bit Linux release of JMP is also multithreaded, and the size and complexity of the problems someone can solve using JMP is mind-boggling.”
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This Erector Set mentality means UNIX system administrators always seem to have the right tool for the job.
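A minimal sketch of exactly that combination. To keep it self-contained and runnable, it builds a tiny stand-in directory and a hypothetical "ERROR" search string; in real use, find would start at /home and the pattern would be whatever entry you're hunting for:

```shell
# Build a tiny stand-in for /home so the example is self-contained,
# then find every .log file and grep each one for a particular entry.
mkdir -p /tmp/demo/home/alice
printf 'starting up\nERROR: disk full\n' > /tmp/demo/home/alice/app.log

# -exec ... {} + batches filenames into as few grep invocations as possible;
# -H makes grep prefix each match with its filename.
find /tmp/demo/home -type f -name '*.log' -exec grep -H 'ERROR' {} +
# prints: /tmp/demo/home/alice/app.log:ERROR: disk full
```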
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide