Part of the success of Linux is due to its commitment to standards. One of the first standards for Unix-like operating systems was POSIX.1 (ISO/IEC 9945-1:1990 or IEEE Std 1003.1-1990), which specifies the system services, interfaces and system limits. It has been adopted by all major Unix vendors since its introduction. Higher levels such as XPG4 from X/Open (a group of computer vendors) are upwardly compatible with POSIX.1. Finally, once an operating system is branded for Single Unix (also known as Spec 1170), it may officially be named Unix(TM), a trademark controlled by X/Open.
Fortunately the design of Linux was aimed at POSIX.1, so nearly all necessary functionality had been implemented from the beginning; however, it needed testing.
Our primary goal at Unifix was certification to Federal Information Processing Standard (FIPS) 151-2 from the National Institute of Standards and Technology (NIST), a U.S. government agency. FIPS 151-2 requires some features that are optional in POSIX.1; thus, FIPS 151-2 includes POSIX.1 and more. We intended to obtain certification for Linux on Intel platforms.
Although it is usually thought of as a programming-language standard, ANSI C (ISO/IEC 9899:1990) is a prerequisite for FIPS 151-2, and this was the first standard to meet. Rüdiger Helsch from Unifix began to clean up the header files (namespace-pollution issues) and fix the math library to ensure full ANSI C conformance. Testing was done using our own tools.
In fall 1995 we acquired the test suite for FIPS 151-2 from NIST. The test procedures are defined in IEEE Std 1003.3-1991 and IEEE Std 2003.1-1992. The first differences were found when compiling the test programs. At a later stage, the generated reports showed where tests had failed. In the following months we did a lot of kernel, libc and test-program recompiles (more than 80 kernel compiles). Don't try that on a 386 SX with 4 megs! Most fixes had to be made in exit.c and in the termios package. After roughly 250 fixes in our system, and two fixes in the test programs, NIST's bin/verify reported no more non-compliant behaviour. We felt some pride at that point but were not finished yet. Rüdiger wrote the mandatory POSIX Conformance Document, in which all system limits and characteristics are specified. Hint: there is an easy way to check for POSIX.1 compliance; a system without this document is never compliant.
Unifix is located in Braunschweig, Germany, and our independent testing laboratory is located in the U.S., so we had to transfer our modified Linux, along with instructions for setting up a test PC, to reproduce our test results. The lab performed completely new testing and is responsible for compliance afterwards; they were not allowed to use any pre-run test results, so everything had to be done from scratch. After some long-distance calls, all configuration mismatches had been ironed out (the very last problem was a suitable serial loopback cable), and the tests ran successfully. At that point we registered the product under the name Linux-FT, and our newly founded company as Open Linux Ltd. (an X/Open member).
To confirm that all went well, we e-mailed POSIX@nist.gov with the subject "send 151-2reg". The mail robot returned a list of all certified products, one of which was our system.
Was it worth it? It took considerable money and effort to get to this point; our partner from the UK, Lasermoon, supported us financially and logistically. We are convinced that we have gained much more stability and portability through the certification process. Signal handling improved considerably, and a lot of small quirks and flaws scattered throughout the sources have been fixed. Most of those ugly #ifdef linux hacks in applications are disappearing. For application developers and porters, these advantages are obvious. Linux-FT is now available and contains all source code (as required by the GPL).
Yes, we will do more certifications. POSIX.2 and XPG4 Base are the next stages, and finally the Single Unix branding. We are currently working on them and we hope our current product will enable us to reach XPG4 certification this summer. In the long term we intend our POSIX.1 changes to flow back into the mainstream kernels and libs (see the math lib, for example). The Linux 2.0 kernel sources will probably be run through our test suite before release.
Heiko Eißfeldt (email@example.com) works at Unifix GmbH, Braunschweig, Germany.