Open Source/Open Science 1999
Brookhaven National Laboratory (BNL), Long Island, NY, hosted the first conference to deal directly with open source and science. The conference was appropriately titled “Open Source/Open Science”.
There are many similarities between the Open Source (or Free Software) movement and the scientific method; both depend on the free exchange of ideas. Publishing one's theories and experimental results so that others can confirm or refute a given idea about the mechanisms of Mother Nature is one of the methods used to advance science. Posting your software on an FTP site for others to download, criticize (or compliment) and return enhancements and bug fixes to be included in the source-code base follows the same open spirit as scientific research.
Because of this close parallel and the fact that scientists strive to use the latest technology available and often invent new technologies to advance their research, a conference on the subject of open-source software and its use in science was considered appropriate. The fact is, scientists have been relying on GNU/open-source/free software for a long time now. Growth of its use and reliance on this software is escalating each year.
The conference was held on October 2, 1999 on the Laboratory campus. The first goal was to highlight the use of open-source software in science; the second was to encourage the private domain to contribute their software technology to the open-source code base. Finally, it was deemed appropriate to couple this event with a public relations outreach to the local and national community by informing them of the exciting work going on at BNL. Since open-source software can be used by anyone from grade-school kids running Linux on their PCs to weather-forecasting supercomputers, this proved to be a great opportunity for the public to relate to the science done at BNL and other national laboratories and universities nationwide.
During the day, a single track of invited talks was given. Complementing these talks were tours of the research facilities where open-source software plays a major role, including two of the four RHIC (Relativistic Heavy Ion Collider) detectors, the Information Technology Division, the Neuroimaging Research Facility and the National Synchrotron Light Source. A call for abstracts was issued for open-source software projects used in research; 14 abstracts were submitted. A room full of X terminals served off a Linux PC was used to demonstrate each abstract.
The invited talks were broken into four groups. The first one was called “Introduction to Open Source and Open Science”. Bruce Perens talked about open source, and Dan Gezelter, a chemist who directs the Open Science project at Notre Dame and is in charge of the http://www.openscience.org/ web site, gave an introduction to his open-science project.
The second set focused on large-scale computing. Yuefan Deng started this section by describing the modest Galaxy Beowulf cluster built at the State University of New York, Stony Brook. This was followed by Tom Throwe of BNL with a talk on the RHIC Computing Center (RCF). The goal of the RCF is to amass 1000 Intel nodes to process in “real time” the data generated by RHIC's four detectors at a continuous rate of 60MB/second. Kent Koeninger, from SGI, gave a talk on the open sourcing of the XFS journaling file system—a key component needed for large-scale computing facilities like the RCF, which will be processing petabytes (a million gigabytes) of data on a yearly basis. Malcolm Capel of BNL's biology department talked about the use of Mosix to manage a 16-node Linux farm. These PCs are used to decode genetic structures from data collected through X-ray diffraction techniques at the National Synchrotron Light Source.
The third set was dedicated to analysis and visualization software. Bill Horn from IBM talked about OpenDX and how it became open-source software. Jon Leech of SGI presented the open-source efforts on OpenGL and GLX. Mark Galassi of Los Alamos National Laboratory presented his work on the GNU Scientific Library. Finally, Bill Rooney of BNL closed this section by presenting the open-source efforts in medical-imaging research.
The final section focused on the political arena. Jon “maddog” Hall titled his talk “It Ain't Open 'Til It's Open”. Immediately following, he moderated a panel discussion on “overcoming the obstacles faced by the Department of Energy (DOE), the National Science Foundation and national facilities like BNL in using and contributing to open-source technologies”. Panelists included Larry Augustin of VA Linux, Oggy Shentov of Pennie and Edmonds, a NYC law firm specializing in intellectual-property law, Michael Johnson of Red Hat, Bruce Perens of technocrat.net and Fred Johnson from the DOE. The panel discussion proved to be a lively one. Fred Johnson stated he was taking close notes of all that transpired and would be reporting back to the DOE.
The conference was deemed a success by all. Over 200 people were in attendance. Vendors displayed their computer equipment, software and services. Red Hat and VA Linux provided major sponsorship funds. Several members of the Laboratory directorate attended the meeting. One of them told me a few days later that he thought the event would be looked back on as a turning point in the Open Source movement in science.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality is why UNIX system administrators always seem to have the right tool for the job.
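The find-plus-grep combination mentioned above can be sketched in a few lines of shell. This is a minimal, self-contained illustration: the directory layout is created on the fly for demonstration, and the search string "ERROR" is purely an assumed example, not anything prescribed by the article.

```shell
#!/bin/sh
# Build a small throwaway tree standing in for /home (illustrative only)
dir=$(mktemp -d)
mkdir -p "$dir/home/alice"
printf 'ok\nERROR: disk full\n' > "$dir/home/alice/app.log"
printf 'all good\n' > "$dir/home/alice/clean.log"

# Find every .log file under the tree, then list those containing "ERROR".
# `-exec ... {} +` hands find's results to grep in batches;
# grep -l prints only the names of files that match.
matches=$(find "$dir/home" -name '*.log' -type f -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -rf "$dir"
```

On a real system you would point find at /home directly; the same pattern extends naturally with other criteria such as `-mtime` or `-size`.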
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Interview with Patrick Volkerding
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide