Part of the success of Linux is due to its commitment to standards. One of the first standards for Unix-like operating systems was POSIX.1 (ISO/IEC 9945-1:1990, also known as IEEE Std 1003.1-1990), which specifies system services, their interfaces and system limits. It has been adopted by all major Unix vendors since its introduction. Higher-level specifications such as XPG4 from X/Open (a group of computer vendors) are upwardly compatible with POSIX.1. Finally, once an operating system is branded for Single Unix (also known as Spec 1170), it may officially carry the name Unix (TM), which is controlled by X/Open.
Fortunately the design of Linux was aimed at POSIX.1, so nearly all necessary functionality had been implemented from the beginning; however, it needed testing.
Our primary goal at Unifix was a standard called Federal Information Processing Standard (FIPS) 151-2, issued by the National Institute of Standards and Technology (NIST), a U.S. Government agency. FIPS 151-2 requires some features that are optional in POSIX.1; thus, FIPS 151-2 encompasses POSIX.1 and more. We intended to obtain certification for Linux on Intel platforms.
Although it is a programming-language standard rather than an operating-system one, ANSI C (ISO/IEC 9899:1990) is a prerequisite for FIPS 151-2, so it was the first standard we had to meet. Rüdiger Helsch from Unifix began to clean up the header files (namespace pollution issues) and fix the math library to ensure full ANSI C conformance. Testing was done using our own tools.
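Namespace-clean headers go hand in hand with the POSIX feature-test macros: a program defines `_POSIX_SOURCE` before any include, and the headers then expose only the POSIX.1 namespace and advertise the conformance level through `_POSIX_VERSION`. The following is a minimal illustrative sketch of that mechanism (the function name is mine, not part of any standard or of the Unifix code):

```c
/* Sketch: querying the POSIX.1 conformance level a libc advertises.
 * Defining _POSIX_SOURCE before any include restricts the headers to
 * the POSIX.1 namespace -- the kind of namespace cleanliness the
 * header cleanup aimed at. */
#define _POSIX_SOURCE
#include <unistd.h>

/* Hypothetical helper: returns the POSIX.1 version the headers claim,
 * or -1 if they claim none.  199009L means POSIX.1-1990
 * (ISO/IEC 9945-1:1990); later systems report later dates. */
long posix1_version(void)
{
#ifdef _POSIX_VERSION
    return _POSIX_VERSION;
#else
    return -1L;  /* headers do not claim POSIX.1 conformance */
#endif
}
```

On any POSIX.1-conforming system this returns at least 199009L; a value of -1 would itself be a (crude) sign of non-conformance.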
In Fall 1995 we acquired the test suite for FIPS 151-2 from NIST. The test procedures are defined in IEEE Std 1003.3-1991 and IEEE Std 2003.1-1992. The first differences turned up when compiling the test programs; at a later stage, the generated reports showed where tests had failed. In the following months we did a lot of kernel, libc and test-program recompiles (more than 80 kernel compiles). Don't try that on a 386 SX with 4 megs! Most fixes had to be made in exit.c and in the termios package. After roughly 250 fixes in our system, and two fixes in the test programs, NIST's bin/verify reported no more non-compliant behaviour. We felt some pride at that point, but we were not finished yet. Rüdiger wrote the mandatory POSIX Conformance Document, in which all system limits and characteristics are specified. Hint: an easy first check for POSIX.1 compliance is to ask for this document; a system without it is never compliant.
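The termios package mentioned above is the POSIX.1 replacement for the old ioctl-based terminal interface, and the conformance tests exercise its get/modify/restore call sequence heavily. As an illustration of that sequence (a sketch of the standard interface, not of the actual Unifix fixes; the helper names are mine):

```c
/* Sketch of the POSIX.1 termios call pattern the conformance tests
 * exercise: fetch the current attributes, modify flags, and later
 * restore the saved state.  Only meaningful on a real terminal. */
#include <termios.h>
#include <unistd.h>

/* Hypothetical helper: disable echoing of input on fd, saving the
 * original settings in *saved.  Returns 0 on success, -1 on error
 * (e.g. fd is not a terminal). */
int set_no_echo(int fd, struct termios *saved)
{
    struct termios t;
    if (tcgetattr(fd, saved) == -1)   /* remember original settings */
        return -1;
    t = *saved;
    t.c_lflag &= ~ECHO;               /* turn off input echoing */
    return tcsetattr(fd, TCSAFLUSH, &t);
}

/* Hypothetical helper: put the saved attributes back. */
int restore_termios(int fd, const struct termios *saved)
{
    return tcsetattr(fd, TCSAFLUSH, saved);
}
```

A careful caller checks `isatty(fd)` first; on anything that is not a terminal, `tcgetattr()` fails and the helpers report -1, which is exactly the error behaviour the test suite probes for.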
Unifix is located in Braunschweig, Germany, and our independent testing laboratory is located in the U.S., so we had to ship our modified Linux, along with instructions for setting up a test PC, to reproduce our test results there. The lab performed a completely new round of testing and is responsible for compliance afterwards. It was not allowed to use any of our pre-run test results, so everything had to be done from scratch. After some long-distance calls, all configuration mismatches had been ironed out (the very last problem was a suitable serial loopback cable), and the tests ran successfully. At that point we registered the product under the name Linux-FT, through our newly founded company, Open Linux Ltd. (an X/Open member).
To verify that all had gone well, we e-mailed POSIX@nist.gov with the subject "send 151-2reg". The mail robot returned a list of all certified products, and our system was on it.
Was it worth it? It took considerable money and effort to get to this point. Our partner in the UK, Lasermoon, supported us financially and logistically. We are convinced that the certification process gained us much more stability and portability. Signal handling improved considerably, and a lot of small quirks and flaws scattered throughout the sources have been fixed. Most of those ugly #ifdef linux hacks in applications are disappearing. For application developers and porters these advantages are obvious. Linux-FT is now available and contains all source code (as required by the GPL).
Yes, we will do more certifications. POSIX.2 and XPG4 Base are the next stages, and finally the Single Unix branding. We are working on them now and hope that our current product will reach XPG4 certification this summer. In the long term, we intend our POSIX.1 changes to flow back into the mainstream kernels and libraries (see the math library, for example). The Linux 2.0 kernel sources will probably be run through our test suite before release.
Heiko Eifeldt (email@example.com) works at Unifix GmbH, Braunschweig, Germany.