The Humble Beginnings of Linux
The histories of many programming projects are maintained by oral tradition. After all, what real programmer would take the time to write down what has happened? Because much of Linux was developed through e-mail conversations on the net, a somewhat firmer record exists. The following is gleaned from those records.
I first worked with Minix in Fall 1989. Dr. Tanenbaum's system was a perfect vehicle for working with operating systems for those who couldn't afford a VAX. It ran on an 8086 with 640 Kbytes and a floppy drive. You could run a few programs in a multi-tasking environment and, since you had the source, you could change the system to your heart's content.
“But wait,” you say, “Minix isn't Linux. What are you talking about?”
“I'm just setting the stage, bear with me a moment.”
The intended target for Minix was students of operating systems in a computer science curriculum. I used it in teaching an upper division class where the term projects were to “enhance the system in some meaningful way.” The projects varied from a serial port driver, to virtual terminals, to simple memory management. No one took the giant step that Linus Torvalds took at the University of Helsinki. (I wish I could say one of my students was changing the course of personal computing!)
As you may know, the memory model of the 8086 is very limiting: it allows easy access to only 640 Kbytes of non-virtual memory. Ugh! But that was the target system for Minix because it was the most common and cheapest system available.
Linus rejected that argument and decided that one needed virtual memory to be able to do anything interesting. Thus, he reckoned that an 80386 was the minimum processor for his system.
His project was to build a kernel for a virtual-memory, pre-emptive, multi-user system. It would have much the same user interface as Minix and Unix (in fact, it used the Minix file system for some time).
From the beginning, Linus made reference to the GNU portable kernel, Hurd, and made it clear that he wasn't planning to supplant Hurd. Since Hurd was expected to be available in late 1992, Linux was clearly just a hackers' delight.
By the time Linus conceived of his project in April 1991, Minix had changed to support the improved Intel processors, but there was still room for extension. Initially Linux was cast in terms of a Minix project, but by late summer the divergence was starting to show.
Early versions were labeled 0.01 (Sept. 91), 0.02 (Oct. 91), 0.03 (Nov. 91), etc., as a hint that they weren't really releases so much as snapshots of work in progress.
Linus gathered a few supporters who would exercise and enhance his work and who appreciated receiving (and contributing) fixes as quickly as they were developed. The kernel soon came to support all the system calls expected of a Unix kernel as more restrictions were removed.
Linus ported gcc and bash, so a basic compiler and command interpreter were in place. (Although, to be precise, the compilations were done under Minix up to version 0.12.) There was some discussion in comp.os.minix about the wisdom of going off and starting another OS, but Linus had his dream (or, some would say, his stubbornness) and he persisted.
By January 1992, the 0.12 version required only modest effort to build and operate and, thus, contributed a great deal toward popularizing Linux.
It should be noted that this was not the only free Unix system for home computers. 386BSD was being developed in California and was a derivative of the Berkeley Unix that had been widely distributed on university campuses around the world. To some extent, 386BSD was a benchmark against which Linux was compared.
At the same time, the various GNU tools were becoming well established in the Unix domain. The standard C compiler, gcc, was regularly found to be better than most vendors' compilers, and the other tools were generally more robust and feature-full than the vendor versions. The fitting of the GNU applications to the Linux kernel was natural and necessary to the success of Linux.
The growing community of Linux users was not afraid to build up a system from sources gathered around the world. A second outside product, the X Window System, provided a graphical interface for Linux users with high-end displays. A third product, NetBSD, provided a springboard to full Internet support for Linux.
The initial numbering scheme had some limitations, but questions such as “Does 0.11 come before or after 0.2?” were safely avoided and the numbering quickly arrived at a limiting value of 0.99. That version was widely distributed, and it was regarded as the first full-featured version of the Linux kernel. There was, by then, a sizeable community of users who depended on a stable version of the kernel. Although there were many patches and sub-patches to this version—often arriving daily—the basic version 0.99 was suitable for release.
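The ambiguity the scheme sidestepped is easy to make concrete: read as decimal fractions, 0.11 precedes 0.2, but read component-wise, as version numbers normally are, 0.11 follows 0.2. A minimal sketch (the version strings are from the article; the helper function is my own illustration):

```python
def version_key(v):
    # Split a version string like "0.11" into integer components: (0, 11).
    return tuple(int(part) for part in v.split("."))

releases = ["0.2", "0.03", "0.11", "0.99"]

# Compared as decimal fractions, 0.11 < 0.2 -- the wrong order for versions.
assert float("0.11") < float("0.2")

# Compared component-wise, 0.11 follows 0.2, as intended.
assert version_key("0.11") > version_key("0.2")

print(sorted(releases, key=version_key))
# ['0.2', '0.03', '0.11', '0.99']
```

By topping out at 0.99 before the question of "0.100 versus 1.0" ever arose, the early numbering avoided having to settle this.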
The Great Release took place at the start of 1994, when Linus identified a stable patch level (0.99pl14r), cleaned up a few last problems, and called it good. This operation was called a “code freeze” and resulted in version 0.99pl15, which held steady long enough for bug fixes, but no enhancements, to arrive.
Part of the code freeze and the Great Release was the recognition that Linux had become a suitable foundation for production systems—systems devoted to doing useful work, instead of being the object of a programmer's machinations. This posed a dilemma: how could Linux continue to evolve and yet be stable?
The solution was simple: have two development paths starting from the same point. The even-numbered releases (1.0.0, 1.0.1, 1.0.2, etc.) followed a slow, careful evolution of a production release system and the odd-numbered releases (1.1.0, 1.1.1, 1.1.2, etc.) were to be the fast-changing, experimental system. Version 0.99pl15, with a few fixes, was the basis of these two systems. Some important fixes moved 1.0.0 to 1.0.9 in the early months of 1994, but that system development path has been unchanged since mid-year. By contrast, 1.1.0 underwent over 50 changes in the first 10 months.
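The even/odd convention is simple enough to state in a few lines of code. A small sketch of the classification rule described above (the rule is the historical one; the function name and wording are my own):

```python
def kernel_branch(version):
    """Classify a 1.x-era Linux kernel version by its minor number.

    Even minor numbers (1.0.x, 1.2.x) marked the slow, careful
    production line; odd minor numbers (1.1.x, 1.3.x) marked the
    fast-changing experimental line.
    """
    major, minor, patch = (int(p) for p in version.split("."))
    return "production" if minor % 2 == 0 else "experimental"

print(kernel_branch("1.0.9"))   # production
print(kernel_branch("1.1.52"))  # experimental
```

The parity test on the middle number is the whole scheme: a user could tell at a glance whether a kernel was meant for daily use or for experimentation.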
Plans are now afoot for the next major release. Again, the stable and well-tested features of the experimental versions (up past 1.1.60) will be incorporated in a production release called 1.2.0. Its twin, version 1.3.0, will be the basis of yet more experimental work on the kernel.
One side effect of the Great Release was that the releases themselves no longer catalogued their changes. This shortcoming was alleviated when Russell Nelson volunteered to distribute a change summary shortly after each patch was distributed.