UNIX under the Desktop
When Steve Jobs introduced Apple's new iMac in January 2002, the spotlight was focused entirely on the physical architecture of the first mainstream computer that fully defied the term “box”. The new iMac is a white dome with a flat screen that floats on the end of a chrome arm. It looks like a cross between a Luxo lamp and a makeup mirror. Jobs called it “the best thing we've ever done”.
Coverage—including a TIME magazine cover story—was all about hardware. Nobody paid attention to Steve Jobs' slickest move of all: leveraging UNIX where it counts. Starting in January 2002, every new Mac ships with OS X as its default operating system. OS X is built on Darwin, an open-source implementation of BSD on a Mach kernel. So now every new Mac is a Trojan horse that arrives with an invisible army of UNIX experts.
Regardless of the technical and religious differences that separate the many breeds of UNIX, expertise at one ports well to another: from Solaris to HP-UX to AIX to Linux to BSD to Darwin and OS X. If you want to hack, the environment is there—so are the tools and the community.
Put another way, OS X gives us the first popular desktop OS that fits into a prevailing Linux environment and also into the prevailing marketplace. On the bottom, it's UNIX. On the top, it runs Microsoft Office and the whole Adobe suite. This has its appeals.
On iDevGames.com, Aaron Hillegass writes:
Tomorrow I will get on a plane. I'll have my PowerBook with me. On that flight, I can write Cocoa apps, PHP-based web sites, Tomcat web applications, AppleScripts or Perl scripts. I can use Project Builder, Emacs or vi. I'll have my choice of MySQL or PostgreSQL to use as a back-end database. I'll use Apache as my web server. And it is all free! If I'm willing to spend a little cash, I can also run Word or Photoshop. I may even watch a DVD on the flight.
The social effects of OS X on the Open Source community were already apparent at the O'Reilly Open Source Convention in July 2001, when slab-like Macintosh G4 Titanium laptops seemed to be everywhere. At one Jabber meeting, four out of the seven attendees tapped away on TiBooks, including Jabber's creator, Jeremie Miller. Terminal windows were scattered across his screen. When we asked what he was doing, he replied, “compiling code while I catch up on some e-mail”.
The growing abundance of OS X fruit on the UNIX tree creates new and interesting market conditions for Linux, along with every other UNIX branch. There are sales projections for six million iMacs alone. Many of these machines will be penetrating markets where Linux has strong incumbent server positions, such as science and education. Lawrence Livermore National Laboratory was once Apple's biggest customer and might easily reclaim the title. In January 2002, the state of Maine announced its intent to give a new iBook to every teacher and student in the seventh and eighth grades. All those kids will have their own UNIX machines. Consider the implications.
Is there a server market for OS X? It's worth noting that OS X Server has existed as a product for more than two years and has never attracted much attention. Also, while every new OS X Mac is ready to perform a variety of server functions, that's not why it sells. IT manager and Mac columnist John C. Welch calls OS X an “okay server, mostly due to hardware limitations and immaturity”. Meanwhile, he says, “Linux is an excellent server. It runs on more and better hardware than Windows can ever dream of, thanks to IBM and Sun.” So OS X is no threat to Linux in the server space. And it's utterly absent from Linux's other home turf, embedded computing.
Where OS X will succeed is in the one category where Linux has struggled for popularity (if not functionality) from the start: on the desktop.
Is this a problem? That was the question at the top of our minds when we visited Macworld in January 2002. To our surprise, the answer was quite the opposite. Not only were plenty of familiar Linux figures walking around kicking tires (approvingly, it appeared), but there were UNIX geeks wearing Sun and SGI schwag as well. One Linux hacker told us OS X was “subversive” because it “seeds” the world with millions of open-source UNIX machines. Another said, “I can go to my Mom's, fire up her iMac, open a shell, ssh to my own server and get some real work done.” So the market logic of Linux and OS X appears to be AND, not OR.
Apple has also attracted some top talent from the open-source ranks. Brian Croll, who runs OS X engineering for Apple, was recruited from Eazel. Jordan Hubbard, the world's foremost BSD hacker (and a founder of FreeBSD), actually pitched his way into a job working on Darwin at Apple. After seeing OS X in preview form, he said “Hallelujah” and “This is what I've been waiting for the past 20 years....I never thought about working for Apple before, and now I was saying, How do I join?”
Working with the Open Source community is still new for Apple, and the relationship has been a challenge to the company's highly proprietary approach to intellectual property. But Apple has compromised on some issues. After hackers barfed on Apple's original public source license, the company issued a new one that the Open Source Initiative soon approved. Shortly after the new license was issued in January 2001, OS X product manager Chris Bourdon summarized it this way: “You can take Darwin and do anything you like. It's there for everybody.”
Doc Searls is Senior Editor of Linux Journal