The Past and Future of Linux Standards
In addition to formal standards like POSIX and ANSI C, one of the most prevalent types of standards used by Linux systems is a common implementation. The best example, of course, is the Linux kernel. No written specification is available for many of the interfaces provided by the kernel. However, no specification is needed because only one strain of kernel is accepted by everyone (not just all distributions or all developers, but truly everyone). Want to know the standard? Read the kernel source.
While not always quite as clear-cut, many more examples of common implementation standards are accepted by the entire Linux community, or at least a large chunk of it. Take the utilities and programs provided by the GNU Project. For example, it has long been accepted that on Linux, the find utility is the enhanced GNU version, not the BSD one or some other variant. If you are writing an application for Linux only, it is probably safe to rely on most of the functionality provided by GNU find. Because so many GNU utilities like this are used, Richard Stallman, founder of the GNU Project, insists that people refer to Linux as “GNU/Linux” or as a “Linux-based GNU system”. This tends to upset many people, but most developers seem to recognize Stallman's immense contribution and don't make a big fuss about it, even if most of them still say “Linux”.
The GNU project's share in the success of Linux does not end there. Linux has now standardized on the GNU C library (glibc) as the common implementation for future Linux systems. The glibc project aims for compliance with several standards. According to Ulrich Drepper, the GNU libc maintainer, all of the important standards are supported as long as they don't conflict with common sense. That list includes ISO C 90, ISO C 9x, POSIX and UNIX 98.
What if there is not a common implementation, a formal standard, or even an informal, de facto or ad hoc standard? One thing you can do is try to pick the best and most common practices and base a standard on them.
The Filesystem Hierarchy Standard (FHS) was among the first written standards developed specifically for Linux. The FHS aims to standardize the locations where files and directories are placed in the system. Standard locations are needed so that applications can be compiled and run well on different Linux distributions; it is also helpful for developers. If you want to write documentation for Linux, or if you need to work on more than one variety of Linux, it is invaluable to know that you can expect to find important files and directories in a standard location.
If you are wondering why an existing standard such as POSIX could not simply be adopted, the reason is that none covered this area. Each UNIX vendor had its own layout, and while there was a great deal of overlap among them, the specifications and rationale for each vendor's layout were lacking or nonexistent. In short, no multi-vendor standard for the layout of files and directories was available.
The FHS was started in 1993. At the time, the Linux OS was approximately two years old. As he does today, Linus dictated how things would work in the Linux kernel and did not exert much influence on other areas of the Linux operating system. With no central authority for those other areas, a number of Linux distributions had considerably different layouts. Some were like the BSD operating system, others were more like SunOS, and others were different still. (Incidentally, most of the names of the big distributions back then were different from the ones we hear today: Debian, Linux/PRO, MCC, Slackware, SLS, TAMU, Yggdrasil, etc.)
The first version of the FHS was based on ideas from SunOS 4, SVR4, 4.3BSD, 4.4BSD, HP-UX and many other UNIX systems, but it was also based as much as possible on common practices distilled from the various Linux distributions existing at the time. The completed specification attempted to take the best of each file-system layout and combine them into a solution that worked well.
Since then, subsequent releases of the FHS have been refinements of the original specification, importing a few additional ideas from other areas (including 4.4BSD and SVR4).
Today, most Linux distributions follow FSSTND 1.2 (the precursor to FHS 2.0). They do this not because they are forced to, but because it cleanly addresses a problem they were having. FHS 2.1 should be available by the time this article is published.
The widespread availability of third-party commercial software for Linux is a relatively new phenomenon. Many vendors are faced with a problem they have never seen before.
At one time, all the software on your Linux box generally came from one of two places. Either it came with the distribution, or someone (you, a friend, maybe even a system administrator) compiled it from source and installed it locally. Distributions are very good at integrating software and making sure everything works together. Assuming someone had the expertise, it was possible to get even a sub-standard program compiled and installed on a Linux box.
What if someone else wants to give you a pre-compiled binary package to install on your system? They are forced to consider which distribution you run, what version of the distribution, your kernel version, and a slew of other considerations before they can be reasonably certain that the application will install correctly on your system and work. If the goal is to provide a binary package for everyone, it is even worse. You must tailor the application for any and all distributions, and then compile, build and test it on each one, too. Is it any surprise that few vendors support even three or four of the major Linux distributions for their applications?
One of the more common fears about Linux is that fragmentation will occur because it uses an open-development model. As in real life, fragmentation occurs when something breaks into many different pieces which are no longer connected. Fragmentation is not a new phenomenon in the free-software community. Sometimes it happens for good reasons, perhaps when a maintainer is not doing a good job. Sometimes it happens for less worthy reasons, such as a personality conflict between lead developers.
This has happened before. A graphical-oriented version of the GNU Emacs editor called XEmacs split off from the version maintained by the Free Software Foundation (FSF) because of technical differences between the FSF and the developers of Lucid Emacs (the predecessor to XEmacs). The XEmacs FAQ does not take a rosy view of the situation:
There are currently irreconcilable differences in the views about technical, programming, design and organizational matters between RMS [Richard Stallman] and the XEmacs development team which provide little hope for a merge to take place in the short-term future.
If you have a comment to add regarding the merge, it is a good idea to avoid posting to the newsgroups, because of the very heated flame wars that often result.
When there is a split like this, it usually has the most impact on the end user. Add-on packages work with one variety of the software, but might not work with another. That is more or less what happened to GNU Emacs, although developers try to compensate for it as best they can.
A small amount of fragmentation, such as the difference between Linux distributions, is good because it allows them to cater to different segments of the community. Because Linux is Open Source, different distributions have the freedom to be unique. For example, the Extreme Linux distribution targets high-performance clusters of Linux PCs running the Beowulf clustering software. Rather than a single all-in-one distribution, each distribution can target a segment of the Linux community and try to meet that segment's needs better than any other distribution. Users benefit by getting a Linux distribution that more closely meets their needs than is possible under a single distribution model.
Also important is the attitude about fragmentation in the Linux community. Everyone is concerned that fragmentation could become a problem, and wants to ensure that applications can run on any variety of Linux. That is where the Linux Standard Base (LSB) is involved. The LSB project is working to define a common subset of Linux that everyone can count on, independent of the distribution. By defining only what can be expected in a minimal base Linux system, the LSB is attempting to find a balance between stifling Linux development and the possibility of Linux fragmenting into several totally incompatible versions.
The main problem the LSB is addressing is that software vendors must port and test their software on multiple Linux distributions, because even a small difference between distributions can result in major problems for the software when the vendor's software has been based on the behavior of one distribution. The difficulties ultimately affect everyone: users, developers, application vendors, Linux companies, et al.
Aside from the danger of fragmentation (which, by the way, Linux has avoided for the last eight years without an LSB), there are secondary dangers, namely FUD—fear, uncertainty, and doubt. The LSB hopes that by making fragmentation a remote possibility, it will help bolster confidence and win support for Linux from even the most conservative sectors. How will it do that? The Linux Standard Base Goals are as follows:
Enable software applications to run on any LSB-compliant distribution.
Increase compatibility among Linux distributions.
Help support software vendors and developers to port and write software for Linux.
Those are the things the LSB developers want to do. On the other hand, we want to avoid some things. Our LSB Guidelines for Success include:
Don't tread on distributions—everyone wants them to be unique.
Do no more than what is required to solve the basic problem of application portability.
Don't break old systems or prevent future advances.