One Tale of Two Scientific Distros
Several weeks ago, I was flying west past Chicago, watching the ground slide by below, when I spotted the signature figure eight of the Fermi National Accelerator Laboratory, better known as Fermilab. I shot some pictures, which I put up at the Linux Journal Flickr pool (Flickr also uses Linux).
I figured Fermilab naturally would use Linux, and found that Fermilab has its own distro: Fermi Linux. Its public site provides a nice window into a highly professional, focused use of Linux. Within Fermi Linux, specific generations are known as Scientific Linux Fermi, each with a version number and one of the code names Charm, Strange, Top, Bottom, Up, Feynmann, Wilson and Lederman.
Some releases also carry LTS, for Long Term Support, in their names. Fermi Linux LTS has a FAQ whose first question is, "What is Fermi Linux LTS?" The answer goes:
Fermi Linux LTS (Long Term Support) is, in essence, Red Hat Enterprise, recompiled.
What we have done is taken the source code from Red Hat Enterprise (in srpm form) and recompiled it. The resulting binaries (now in rpm form) are then ours to do with as we desire, as long as we follow the license from that original source code, which we are doing.
We are choosing to bundle all these binaries into a Linux distribution that is as close to Red Hat Enterprise as we can get it. The goal is to ensure that if a program runs and is certified on Red Hat Enterprise, then it will run on the corresponding Fermi Linux LTS release.
A follow-up Q goes, "I really don't want to get into legal trouble, please convince me that this is legal." The A says:
What we are doing is getting the source rpm of each Red Hat Enterprise package from a publicly available area. Each of these packages, except for a few, have the GPL license. This license states that we can freely distribute that package. We are recompiling those packages without any change. Hence, we can freely distribute those rpms that were built....And although these rpms are basically identical to Red Hat's Enterprise Linux, they were built by us and are freely distributable. We can do with them what we want....
Although it is basically identical to Red Hat Enterprise Linux, it is, in essence, a completely different release, just with the same programs, packaged the same way.
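The workflow the FAQ describes follows the standard RPM toolchain: fetch a source rpm, rebuild it unchanged, and the resulting binaries are yours to redistribute under the original licenses. A minimal sketch, where the package name and mirror path are illustrative assumptions, not taken from Fermilab's actual build system:

```shell
# Fetch a source rpm from a publicly available SRPM area.
# (URL and package name are hypothetical examples.)
$ wget http://ftp.example.com/enterprise/SRPMS/bash-3.2-24.el5.src.rpm

# Rebuild it without modification; the resulting binary rpms
# land under ~/rpmbuild/RPMS/ by default.
$ rpmbuild --rebuild bash-3.2-24.el5.src.rpm
```

The `rpmbuild --rebuild` step is what makes the binaries "ours," in the FAQ's words: identical source inputs, but compiled locally rather than taken from Red Hat's binary distribution.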
Fermilab supports its own users and directs others toward Scientific Linux, which was codeveloped by Fermilab, CERN and other laboratories and universities. Troy Dawson is the primary contact for both Fermi Linux and Scientific Linux. On his own site, he explains, "Fermilab uses what is called Fermi Linux. It is now based on Scientific Linux. It is actually a site modification, so technically it is Scientific Linux Fermi. But we call all of the releases we have made Fermi Linux."
While Fermi Linux's version history starts with 5.0x in 1998, Scientific Linux's history starts with 3.0.1 in 2004. Both sites' current distribution version pages have near-identical tables of releases, dates and notes. The latest version for both is 5.x.
In a comment to an on-line Linux Journal article, William Roddy wrote, "Scientific Linux will work in any environment Red Hat would, and even better. It's a work of art and genius, and in the field of high-energy physics, if this Linux didn't work, it wouldn't be used. Yet, it is useful to anyone. If you demand stability and security, you will not do better. It will always be there and it will always be free."
Doc Searls is Senior Editor of Linux Journal