Part of the success of Linux is due to its commitment to standards. One of the first standards for Unix-like operating systems was POSIX.1 (ISO/IEC 9945-1:1990, or IEEE Std 1003.1-1990), which specifies system services, interfaces and system limits. It has been adopted by all major Unix vendors since its introduction. Higher levels such as XPG4 from X/Open (a group of computer vendors) are upwardly compatible with POSIX.1. Finally, once an operating system is branded for Single Unix (also known as Spec 1170), it may officially be named Unix (TM), a name controlled by X/Open.
Fortunately, the design of Linux was aimed at POSIX.1, so nearly all of the necessary functionality had been in place from the beginning; however, it needed testing.
Our primary goal at Unifix was compliance with a standard called Federal Information Processing Standard (FIPS) 151-2 from the National Institute of Standards and Technology (NIST), a U.S. Government agency. FIPS 151-2 requires some features that are optional in POSIX.1; thus, FIPS 151-2 includes POSIX.1 and more. We intended to obtain certification for Linux on Intel platforms.
Although it is a programming-language standard rather than an operating-system one, ANSI-C (ISO/IEC 9899:1990) is a prerequisite for FIPS 151-2, and this was the first standard to meet. Rüdiger Helsch of Unifix began to clean up the header files (namespace-pollution issues) and fix the math library to ensure full ANSI-C conformance. Testing was done using our own tools.
In Fall 1995 we acquired the test suite for FIPS 151-2 from NIST. The test procedures are defined in IEEE Std 1003.3-1991 and IEEE Std 2003.1-1992. The first differences were found when compiling the test programs; at a later stage, the generated reports showed where tests had failed. In the following months we did a lot of kernel, libc and test-program recompiles (more than 80 kernel compiles). Don't try that on a 386 SX with 4 megs! Most fixes had to be made in exit.c and in the termios package. After roughly 250 fixes in our system, and two fixes in the test programs, NIST's bin/verify reported no more non-compliant behaviour. We felt some pride at that point, but were not finished yet. Rüdiger wrote the mandatory POSIX Conformance Document, in which all system limits and characteristics are specified. Hint: there is an easy way to check for POSIX.1 compliance; a system without this document is never compliant.
Unifix is located in Braunschweig, Germany, and our independent testing laboratory is located in the U.S., so we had to ship our modified Linux, along with instructions for setting up a test PC, to reproduce our test results. The laboratory performed the entire test run anew and is responsible for compliance afterwards; it was not allowed to use any pre-run test results, so everything had to be done from scratch. After some long-distance calls, all configuration mismatches had been ironed out (the very last problem was a suitable serial loopback cable), and the tests ran successfully. At that point we registered the product under the name Linux-FT, and our newly founded company as Open Linux Ltd. (an X/Open member).
To confirm that all had gone well, we e-mailed POSIX@nist.gov with the subject send 151-2reg. The mail robot returned a list of all certified products, and our system was among them.
Was it worth it? It took considerable money and effort to get to this point. Our partner in the UK, Lasermoon, supported us financially and logistically. We are convinced that the certification process has given us much greater stability and portability. Signal handling improved considerably, and many small quirks and flaws scattered throughout the sources have been fixed. Most of those ugly #ifdef linux hacks in applications are disappearing. For application developers and porters, these advantages are obvious. Linux-FT is now available and contains all source code (as ensured by the GPL).
Yes, we will do more certifications. POSIX.2 and XPG4 Base are the next stages, and finally the Single Unix branding. We are currently working on them and we hope our current product will enable us to reach XPG4 certification this summer. In the long term we intend our POSIX.1 changes to flow back into the mainstream kernels and libs (see the math lib, for example). The Linux 2.0 kernel sources will probably be run through our test suite before release.
Heiko Eifeldt (email@example.com) works at Unifix GmbH, Braunschweig, Germany.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.