Unix Wars, Redux
For years, Unix vendors fought each other for market share by each implementing their own proprietary extensions to Unix, each trying to use their own extensions to sell their version of Unix—usually on their own hardware. No one blames them for competing, but the result was not good for the whole Unix marketplace. Many slightly different products, which had different names but somehow were all understood to be Unix, more or less, bewildered and bothered consumers.
Because they recognized the damage this situation could cause, Unix vendors have been making interesting noises for years about closer co-operation with each other. Novell bought Unix, then gave the Unix trademark to X/Open, to create an open branding process in order to help bring the Unix world together. X/Open then released the Single Unix Specification, to which all vendors' versions of Unix are required to conform in order to use the trademark. To an amazing extent, this has been a success; quite a few products that could not use the trademark before have now been “branded”.
Last fall, at Unix Expo, HP and SCO announced that they were working together to buy Unix development rights from Novell, and that they were going to develop the new standard 64-bit Unix. While several versions of Unix have been more or less 64-bit in the past, the Single Unix Specification (SUS) does not explicitly address 64-bit issues, and SCO and HP (especially HP) were going to supply the Unix world with 64-bit Unix.
This was expected to provide the Unix world with, essentially, 64-bit additions to the SUS, which would be implemented in every 64-bit version of Unix.
The careful reader will have already noted my use of the past tense. Welcome to reality in the Unix world. SCO and HP recently made it quite clear that they intend for their extensions and additions to be available only from SCO and HP. They will provide the whole Unix world with a unified 64-bit Unix, all right, provided that everyone uses their operating system on their hardware. Welcome back to market fragmentation—called “product differentiation” by the spin doctors.
No, this isn't just a tale of woe. The Linux community used to be unified because of a simple lack of need for competition. Now, Linux distributions are somewhat differentiated, but most Linux vendors work together, recognizing that their long-term chances of survival are far better working together than fighting. Unlike the Unix community, the Linux community has stayed generally unified in the face of commercial interest.
Linux, though it started out as a toy, has for some time been real competition for “Real Unix”. Ignoring the licensing issues for the moment, Linux looks a lot like a vendor-differentiated version of Unix. While a few Unix versions have extra features that Linux does not (yet), such as journaling file systems, process migration, and fail-over server capability (See Huh?), Linux has features that distinguish it as well. For example, Linux has high-quality networking with support for many protocols; few commercial versions of Unix can provide Novell, Appletalk, SMB, and AX.25 in addition to standard TCP/IP networking.
Linux also uses memory frugally; with Linux, it is perfectly reasonable to use a machine with only 4MB of RAM—with most versions of Unix, that's not even enough to boot, let alone do useful work. Linux has more complete hardware support, especially for legacy hardware, than most (all?) versions of Unix for Intel x86 computers. Linux distributions usually include far more application software than is generally included in a Unix distribution. And, last but not least, Linux comes with source code.
Most versions of Unix support one, or at most two, CPU architectures. Sun's Solaris supports SPARC and Intel x86. SGI's Irix supports MIPS. SCO supports Intel x86. Digital Unix supports the Alpha. Linux currently supports Intel x86, Alpha, SPARC, Motorola 68K, PowerPC, MIPS, and Acorn ARM. The source code is now designed to make adding new architectures easy.
Linux is also now a 64-bit operating system. More properly, it is mostly bit-size-independent; it operates as a 32-bit operating system on a 32-bit CPU, a 64-bit operating system on a 64-bit CPU, and on a 16-bit CPU, the subset of Linux that can fit into memory operates as a 16-bit operating system (see www.linux.org.uk/Linux8086.html).
Ignoring license issues, Linux and the various versions of Unix are pretty similar; they have the same core functionality, and each has a few extensions or features which differentiate it.