Fd.o: Building the Desktop in the Right Places

Don't be fooled by chatter about desktop wars. Applications and desktop environments are cooperating behind the scenes and using reality-tested standards to make everyone's software work and play well together.
Cairo

Vector graphics create an image by drawing more or less complex lines and filling the resulting areas with colors. The corresponding files are small and can be scaled to any resolution without loss. This makes the technique attractive to anyone who wants to be sure that what they print is exactly what they see on screen. Unfortunately, X knows how to manage on-screen pixmaps of text, rectangles and the like, but it simply ignores printing and vector graphics. This is one of the reasons why we still do not have 100% consistency among screen, paper and saved files.

The FD.o solution is Cairo, “a new 2D vector graphics library with cross-device output support”. In plain English, this means the result is the same on all output media. Externally, Cairo provides user-level APIs similar to the PDF 1.4 imaging model.
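
The following is a minimal sketch of how this imaging model looks to a programmer, using the pycairo binding (my choice for illustration, not something the FD.o platform mandates). It describes a path with vector operations and renders it into an in-memory image buffer:

    import cairo  # pycairo, the Python binding for the Cairo library

    # Draw into an in-memory image buffer, one of Cairo's back ends.
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 200)
    ctx = cairo.Context(surface)

    ctx.set_source_rgb(1, 1, 1)        # white background
    ctx.paint()

    ctx.set_source_rgb(0.2, 0.4, 0.8)  # describe a triangle as a vector path...
    ctx.set_line_width(6)
    ctx.move_to(20, 180)
    ctx.line_to(100, 20)
    ctx.line_to(180, 180)
    ctx.close_path()
    ctx.stroke()                       # ...and stroke it

    surface.write_to_png("triangle.png")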

Through different back ends, Cairo can support different output devices. The first targets are the screen, through either Xlib or XCB, and in-memory image buffers, which can then be saved to a file or passed to other applications. PostScript and PNG output is already possible, and PDF is planned. OpenGL-accelerated output also will be available through a back end called Glitz, which additionally can be used as a standalone layer above OpenGL. Cairo bindings exist for languages such as C++, Java, Python and Ruby, as well as for the GTK+ toolkit.
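
Because the drawing calls are identical for every back end, retargeting output is just a matter of creating a different surface. Here is a sketch under the assumption of a current pycairo build in which the PostScript back end, and nowadays the PDF one as well, is compiled in:

    import cairo

    def draw(ctx):
        # The same vector drawing code runs unchanged on any surface.
        ctx.set_line_width(6)
        ctx.move_to(20, 180)
        ctx.line_to(100, 20)
        ctx.line_to(180, 180)
        ctx.close_path()
        ctx.stroke()

    # PostScript output; dimensions are in points (1/72 inch).
    ps = cairo.PSSurface("triangle.ps", 200, 200)
    ctx = cairo.Context(ps)
    draw(ctx)
    ctx.show_page()   # emit the finished page
    ps.finish()

    # The PDF back end, only planned when this article was written,
    # works the same way in current Cairo releases.
    pdf = cairo.PDFSurface("triangle.pdf", 200, 200)
    ctx = cairo.Context(pdf)
    draw(ctx)
    ctx.show_page()
    pdf.finish()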

The developers of OpenOffice.org also are planning to use Cairo after version 2.0 of the OOo suite is released, possibly even as a separately downloadable graphics plugin. Because Cairo is still in active development and its API is not yet completely stable, it is not yet included in official FD.o platform releases.

D-BUS

D-BUS is a binary protocol for Inter-Process Communication (IPC) among the applications of one desktop session or between that session and the operating system. The second case covers dynamic interaction with the user whenever new hardware or software is added to the computer. The internals of D-BUS were discussed by Robert Love in “Get on the D-BUS” in the February 2005 issue of Linux Journal. As far as the desktop end user is concerned, D-BUS should provide at least the same level of service currently available in both GNOME and KDE. To achieve this, both a systemwide dæmon, called the message bus, and a per-user, per-session dæmon are available. It also is possible for any two programs to communicate directly over D-BUS, to maximize performance. For the same reason, and because programs sharing a D-BUS instance almost always live on the same host, a binary format is used instead of plain XML.
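
As a concrete illustration of session-level IPC, the short sketch below uses the dbus-python wrapper library (one of several bindings; the library choice and the call to the desktop notification service are my assumptions, not part of the article) to send a pop-up notification through the per-user message bus:

    import dbus

    # Connect to the per-user, per-session message bus.
    bus = dbus.SessionBus()

    # Ask the desktop's notification service, assuming one is running,
    # to display a message; the names come from the fd.o notification spec.
    proxy = bus.get_object('org.freedesktop.Notifications',
                           '/org/freedesktop/Notifications')
    notify = dbus.Interface(proxy,
                            dbus_interface='org.freedesktop.Notifications')

    notify.Notify('demo-app', dbus.UInt32(0), '',        # app name, id, icon
                  'Hello from D-BUS',                    # summary
                  'This message travelled over the session bus.',
                  dbus.Array([], signature='s'),         # actions
                  dbus.Dictionary({}, signature='sv'),   # hints
                  3000)                                  # timeout in ms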

The message bus dæmon is an executable acting as a router. By passing messages instead of raw byte streams among applications, the dæmon makes their services available to the desktop. Normally, several independent instances of this dæmon run on each computer: one for system-level communications, with heavy security restrictions on what messages it accepts, and one per user session, to serve the applications inside it. The systemwide instance of D-BUS could become a security hole, because services running as root must be able to exchange information and events with user applications. For this reason, it is designed with limited privileges and runs in a chroot jail. D-BUS-specific security guidelines can be found on the FD.o Web site (see the on-line Resources).
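
The difference between the two kinds of instances is easy to see from a client. The sketch below, again using dbus-python as an assumed binding, connects to both dæmons and asks each one, through the bus's own org.freedesktop.DBus service, which names it is currently routing messages for:

    import dbus

    # One systemwide bus, one bus per user session; both run the same
    # message bus executable, only with different configurations.
    for label, bus in (('system', dbus.SystemBus()),
                       ('session', dbus.SessionBus())):
        core = dbus.Interface(
            bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus'),
            dbus_interface='org.freedesktop.DBus')
        print(label, 'bus:', len(core.ListNames()), 'registered names')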

Most programmers do not need to speak the D-BUS protocol directly. Wrapper libraries exist to use it from any desired framework or language. In this way, everybody can keep his or her preferred environment rather than learning or switching to a new one specifically for IPC. End users, again, gain interoperability: KDE, GNOME and Mono programs will be able to talk to one another, regardless of toolkit. As with Cairo, the first versions of the FD.o platform don't include D-BUS, because its API is not yet stabilized, but the developers consider D-BUS a cornerstone of future releases. D-BUS also is meant to replace DCOP in KDE 4.
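
To give an idea of what such a wrapper looks like in practice, here is a sketch of a tiny service exported through the Python binding; the service and interface name (org.example.Echo) is made up for the example, and it assumes dbus-python together with the GLib main loop from PyGObject:

    import dbus
    import dbus.service
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    # Hook dbus-python into the GLib event loop so incoming calls are dispatched.
    DBusGMainLoop(set_as_default=True)

    class EchoService(dbus.service.Object):
        # Expose one method on a made-up interface name.
        @dbus.service.method('org.example.Echo',
                             in_signature='s', out_signature='s')
        def Echo(self, text):
            return text

    bus = dbus.SessionBus()
    name = dbus.service.BusName('org.example.Echo', bus)  # claim a well-known name
    EchoService(bus, '/org/example/Echo')

    GLib.MainLoop().run()  # serve requests until interrupted

Any other client, written in any language that has a D-BUS binding, could now call Echo on org.example.Echo without knowing or caring that the service happens to be written in Python.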

Is This the Right Solution?

Only time will tell whether the first implementations of FD.o are good enough and whether the related specifications are valid. In this context, valid means complete enough that they could be re-implemented from scratch with totally new code, if one felt like doing so. I am convinced, however, that the approach is sound and has more potential than any other existing “complete desktop”.

The two most frequent complaints I've read so far are 1) the current desktops would lose their identities, becoming “only user-interface stuff”, and 2) FD.o is producing not standards but simply other implementations. My personal response to the first concern is: even if it happened, would it really be a problem? Most end users wouldn't even notice, nor would they care. They most likely would note the improvements I mentioned at the beginning and be done with it. Making sure that all applications can cooperate, no matter how they were developed, also would make Linux much more acceptable as a corporate desktop, shutting up a whole category of arguments in favor of proprietary solutions.

If protocols and formats stop being tied to specific implementations or toolkits, they can be shared across multiple “desktop environments”. Code stability and lightness would benefit directly from this, as would innovation. Completely new programs could interact immediately with existing ones. I therefore hope that this trend becomes general and that more application-independent standards are submitted to FD.o, covering file formats, sound schemes, color and task settings. Imagine one mail configuration file that could be used by any e-mail client, from Evolution to mutt, or one bookmark file usable by every browser from Mozilla to Lynx.

As for the second objection, that FD.o is producing not standards but simply other implementations: that's exactly how free software and RFCs work. As long as specifications are written alongside the code, concepts can be validated in the field as soon as possible. For the record, the same thing currently is happening with OO.o and the OASIS office file format standard (see LJ, April 2004). The file format started and matured inside StarOffice and OO.o, but now it has a life of its own. The committee currently includes representatives from KOffice, and any future office suite can adopt it as its native format, starting only from the specification.

Some traps do exist along this path, but as far as I can tell, the developers are aware of them and determined to avoid them. The first risk is developing standards that, for one reason or another, work well only on Linux, leaving out the other UNIXes. The other is resource usage: all the magic described here would look much less attractive if it required doubling the RAM to run smoothly. As far as we know today, however, this seems unlikely. In any case, this is the right moment to join the effort. Happy hacking!

______________________

Articles about Digital Rights and more at http://stop.zona-m.net
CV, talks and bio at http://mfioretti.com

Comments

Matteo writes:

I think that the more interesting option hasn't yet been explored. Why not use an industry-standard API like OpenVG? There would be so many advantages, especially if accelerated via OpenGL, as shown here.
