Interview with Miguel de Icaza
Miguel de Icaza is the creator of the GNOME desktop environment. Below, Aleksey Dolya interviews Miguel about the process of creating GNOME and what he's up to these days.
Aleksey Dolya: Miguel, could you tell us a little about yourself?
Miguel de Icaza: I currently work for Ximian, a software company that my friend Nat and I started in 1999. Ximian has been an extremely fun adventure, and we are glad that we did it. I was raised in Mexico, and both my parents were scientists (my father a physicist and my mother a biologist). Mexico is a third-world country, so open source basically was a great educational tool for me. Otherwise, I never would have gotten access to this high-level technology. Also, my support group in Mexico helped me to become a better programmer.
AD: The word de in your surname points to your noble roots, doesn't it?
MdI: Which is a shame in human history. Class differences are wrong.
AD: Could you tell us about the idea behind creating GNOME?
MdI: We wanted to make Linux succeed on the desktop. It was and is fairly successful as a server platform, and it really shines there. But, we did not want to see free software limited to the server market. We wanted end users to be able to get this system and have the same freedoms and potential that those using Linux for a server operating system were enjoying.
KDE was an inspirational project, but at the time, the Qt toolkit on which KDE was built was a proprietary toolkit. It was a disgrace that everyone in the community had worked so hard to create a fully open-source [desktop] (a legacy to all humanity) that we would give up in the end because of the lack of a free toolkit. So GNOME was started to make sure that we had a fully free system, and it was based on the most advanced C toolkit available at the time: the toolkit built by Peter Mattis and Spencer Kimball for their most excellent GIMP imaging software. Part of the motivation to create free software, and to ensure that things were fully free, came from my involvement with Linux on the SPARC, which I helped develop. Around that time the x86 was not very fast, and we were running Linux/SPARC, Linux/ALPHA and Linux/SGI. But the downside was that proprietary applications coming out for Linux on the x86 just did not run on our higher-end computers. And there was not much that could be done about it, as we did not have the source code.
AD: What tools did you use to create GNOME? How was that process?
MdI: C compilers, editors, debuggers. The usual: people in the GNOME community built a number of tools for debugging and improving the system, like Owen Taylor's grandiose memprof tool.
AD: How do you find today's popularity of GNOME?
MdI: I am very pleased with it.
AD: Could you compare KDE and GNOME? At first glance, they seem different only in interfaces and themes, but the functionality is the same.
MdI: I do not run KDE, so it is hard for me to compare. People tell me that GNOME is faster and uses less memory, but that is just what people report.
AD: GNOME is an industry standard. How did it happen, and why?
MdI: GNOME and another project, called Harmony, in a way made Trolltech revisit the license of Qt. And Qt became free software, but with a twist: the libraries were GPL libraries. Hence, it was possible to create free software with it (under the GPL), but anything else required a commercial license for Qt. Many vendors have a problem with shipping a system that requires a royalty to be paid on a per-developer basis. That is why GNOME is interesting: our libraries for creating applications are royalty-free. Also, I think the GNOME community is fascinating and has created some great software and some great extensions. A big influence on GNOME, I feel, was the artists and people who had a general inclination for computer graphics (coming from The GIMP background), so this made for a pleasant platform. I think that both GNOME and KDE have been able to innovate in many areas. Today, we try to cooperate on some fronts; on other [fronts], we both play catch-up to the latest feature added by the other team.
AD: Do you know Matthias Ettrich, the KDE creator? What kind of relationship do you have with him?
MdI: Yes, I have met him a few times. But, we have not really spoken too much.
AD: Where do you work now, and what are you doing?
MdI: I work at Ximian, but my focus has changed from doing GNOME development to working on a project called Mono. Mono is an open-source implementation of the .NET Framework, a development platform that I really like. A lot of the effort that has gone into Mono has been put there mainly to help GNOME become a better platform--by merging these two worlds. And, those of us working on the project would love to see more Mono-based desktop applications out there.
AD: What are your future plans?
AD: Why do you like the .NET Framework? Do you really think that new platform will be useful for UNIX OSes?
MdI: This can be better answered by this old post.
AD: Will Mono consist of only the .NET Framework, or are you working on the development tools too?
MdI: Some development tools, but no IDE. We suggest that people who are interested in an IDE use SharpDevelop or Eclipse.
AD: What is your favorite Linux distribution?
MdI: I have no favorite distribution.
Aleksey Dolya is a Russian C/C++ programmer interested in network security and software protection.