From the Editor - Desktop Success Is in the Details
Back in December 1995, Linux Journal reviewed a new Linux distribution called Caldera Network Desktop (CND) from a new company called, as you might remember, Caldera. Although CND made a big splash with a GUI environment and Novell client support, it didn't exactly get the corporate desktop migrated to Linux right away.
In the almost nine years since CND came out, more waves of Linux desktop releases have crashed against the rocks, then rolled back. But each wave has removed important obstacles to putting Linux on everyone's computer.
The GNOME and KDE Projects, and the freedesktop.org interoperability effort that is making their software work together, are finally bringing user interface sanity to the X Window System. Ambitious contributions from companies, from AOL to Ximian, have filled in big pieces, including a world-class browser, office suite and graphics libraries.
All that keeps corporate users away from Linux now is the details. It's the little things in real-world IT environments that seem to make desktop Linux a “maybe-next-year” project. You might have a stubborn, difficult-to-port, in-house application written for a legacy, non-Linux OS. You might be working with an embedded device or an application service provider whose supposedly Web-based software has a bug that keeps it from working with the Web browsers available for Linux.
Successful Linux desktop plans depend on the details. The office suite is an anchor that keeps non-Linux desktops around. But it might be easier to switch than you think. Bruce Byfield breaks down this intimidating task on page 52.
If you think that integrating your own programs and scripts with office suite documents means you have to wait for some office suite vendor to release an upgrade, think again. James Britt shows how to write simple software that handles OpenOffice.org documents on page 78.
Even if you don't have a Linux desktop migration planned now, make sure not to make development choices that will cause migration problems later. Future-proof software is cross-platform. If you develop now with Renaissance, which Ludovic Marcotte explains on page 58, you'll be able to upgrade from your legacy Mac OS systems to Linux seamlessly. You can even move scripts from any platform to Linux using Tcl/Tk, which Derek Fountain covers on page 83.
Moving to Linux doesn't mean users have to give up fun with photos and sounds. Learn about XMMS and a fun photo editing trick on pages 68 and 88. Last and most important, in most companies, if you can't sell management on it, you won't get it. Make the case for your desktop Linux migration in style, on Linux, with Rob Reilly's presentation advice for speakers on page 46.
Don Marti is editor in chief of Linux Journal.