The X Factor: Apple Rolls Out New Version of X11 Windowing Environment
When Apple launched its OS X development effort, it made a big deal out of the new OS's UNIX base. But for UNIX folks using the OS, certain components were missing or incomplete. For example, it lacked package management, so the Fink development team ported dpkg and apt-get from Debian; the XonX project implemented an X window system; and the XFree86 project made X11 binaries available for Darwin (Apple's open-source, BSD-derived base code for OS X). Still, X11 was hardly a strong suit for OS X.
Apple changed that this past Tuesday when it quietly announced X11 for Mac OS X, its own new open-source implementation of XFree86. Although Steve Jobs didn't make the announcement in his keynote (a press release carried the weight), Apple engineers at Macworld told me the new X11 is still a big deal: "This is one area where we had a lot of catching up to do."
Apple calls the new release "a complete, rootless X11R6.6 implementation, as well as display server and client libraries--plus headers in the SDK". The new implementation supports SSH tunneling and runs concurrently and seamlessly with other applications that use Apple's Aqua user interface. Content can be cut, copied and pasted between X and Aqua windows. It also takes advantage of Apple's Quartz graphics system.
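For readers who haven't used X11 forwarding before, the SSH tunneling support works the same way it does on any other UNIX. A minimal sketch (assuming a remote host, here called "remotehost", with X11 forwarding enabled in its sshd configuration):

```shell
# With Apple's X11 running locally, launch a remote X client over an
# encrypted SSH tunnel. The -X flag forwards X11 traffic; the remote
# xterm's window then appears rootless, side by side with Aqua apps.
ssh -X remotehost xterm

# On the remote end, SSH sets DISPLAY automatically (e.g. localhost:10.0),
# so any X client started in that session is tunneled back to the Mac.
```

"remotehost" is a placeholder, not a real machine; the point is that no X-specific configuration is needed on the Mac side beyond running the X11 application.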
While the obvious purpose of the move is to give Apple parity with other UNIXes, the more important purpose is to allow easier porting of X applications to OS X.
When I talked with Avie Tevanian, Apple's Senior VP of Software Engineering, he was enthusiastic about the project: "The majority will love the fact that, as open-source developers, they have the opportunity to take what we've done, tweak it, modify it, clear it up, whatever. And they now have a channel to get it out to millions of people. Of course, we're also hoping they'll port their applications to Cocoa" (Apple's OS X application development environment).
As with the company's Java, GCC and browser efforts, Tevanian said he wanted this version of X11 to be best of breed and thinks that status has been achieved. "The version that we have released we think is the best one out there", he said.
James Davidson (author of Learning Cocoa with Objective-C and the original author of Apache Ant and Apache Tomcat) said, "They'll have to repeat what they did with GCC. In the NeXTStep days they took a snapshot of GCC and forked. Now they're trying not to repeat the experience of integrating everything back, by doing things the right way from the start."
Doc Searls is senior editor of Linux Journal.