The Puzzle of 3-D Graphics on Linux

Several tools provide new opportunities for game developers on Linux.
Mesa: Open Source OpenGL

Mesa is an unlicensed implementation of the OpenGL standard, available on several platforms, including Linux, Windows and the Macintosh. Started in 1995, it has been under development ever since by a team led by Brian Paul. Mesa is free and open source, so anyone can work with the source code and port it to another operating system. For the most part, OpenGL applications compile and run against a Mesa library just as they would against a licensed OpenGL library.

Provisions for hardware acceleration are incorporated into Mesa. For example, owners of a 3dfx card may choose to download and install the Glide SDK from 3dfx and then recompile Mesa from its source code. When Mesa configures itself for compilation, it should detect the installed Glide headers and libraries and consequently add the necessary code to allow the 3dfx card to accelerate many of the OpenGL functions (via Glide 2.x).
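The commands below sketch that process for a Mesa 3.x-era source tree. The archive name, paths and configure behavior are illustrative assumptions, not exact instructions for any particular release; consult the README shipped with your Mesa sources.

```shell
# Sketch of rebuilding Mesa 3.x with Glide support.  The exact paths and
# configure behavior varied between releases, so treat this as illustrative.

# 1. Install the 3dfx Glide SDK first (headers typically land under
#    /usr/include/glide, libraries under /usr/lib).
# 2. Unpack the Mesa source and configure it; the configure script looks
#    for the Glide headers and enables the 3dfx driver when it finds them.
tar xzf MesaLib-3.2.tar.gz
cd Mesa-3.2
./configure       # should report that 3dfx/Glide support was detected
make
make install      # installs a libGL with Glide-backed acceleration
```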

Like OpenGL, Mesa has undergone several revisions. As of Mesa 2.x, the OpenGL 1.1 standard has been supported. The later Mesa 3.x library is an implementation of the OpenGL 1.2 standard, and thus should be nearly as current as OpenGL itself. Mesa also includes support for GLUT and GLU.

So now we have OpenGL, a programming interface for creating 3-D graphics, and an open-source implementation called Mesa. The next part of the puzzle is the glue that joins OpenGL and the X Window system.

GLX: Using OpenGL with the X Window System

Because OpenGL is platform- and operating-system-independent, it is also window-system-independent; it needs a window system binding before it can interact with the window system at all. The binding provides functionality such as locating a window on the screen and processing input. On UNIX and Linux systems, that binding is GLX, the library that lets OpenGL and X work together. (Under Microsoft Windows, the equivalent is WGL.) With GLX, OpenGL can use an X window for its output. Even when you're using Mesa (full-screen or non-DRI; we'll explain DRI in a moment), a fake GLX implementation makes the system think it is running under the normal window system binding. The GLX currently used in Linux is based on the source code SGI released in February 1999.
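As a concrete illustration of what the binding provides, here is a minimal sketch of attaching an OpenGL context to an X window. The glX* calls are the GLX glue; everything else is ordinary Xlib or OpenGL. Error handling and event processing are omitted, and a real program would, of course, check every return value.

```c
/* A minimal sketch of the GLX binding: the glX* calls tie an OpenGL
 * rendering context to an X window.  Error handling is omitted. */
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/gl.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);       /* connect to the X server */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };

    /* Ask GLX for an X visual capable of RGBA, double-buffered OpenGL. */
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* Create an X window that uses the visual GLX chose for us. */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                               640, 480, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap, &swa);
    XMapWindow(dpy, win);

    /* Create an OpenGL context and bind it to the window. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, win, ctx);

    /* Ordinary OpenGL calls now draw into the X window... */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);                /* GLX, not OpenGL, swaps */

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}
```

Compiling typically requires linking against the GL and X11 libraries, e.g., `-lGL -lX11`.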

Some of you might even have run across the term Utah-GLX. What is Utah-GLX, and what part does it play in this puzzle?

Utah-GLX

Utah-GLX is a project to add OpenGL capabilities to some current video accelerators, like the Matrox G400/G200 cards and the ATI Rage Pro and Rage 128 cards, while still using XFree86 3.3.x. The Utah-GLX driver uses indirect rendering (and in some cases, a form of direct rendering) to provide this kind of acceleration. However, as will be explained shortly, this rendering incurs a performance penalty that prevents the hardware from reaching its full potential.

Among other accomplishments, the Utah-GLX project has led to the first hardware acceleration on the Linux PPC platform as well as the first hardware acceleration for a laptop. In both of these cases, the video card is the ATI Rage Pro.

Now that XFree86 4.0 has been released, the hope is that much of the work in the Utah-GLX project can be ported to the DRI. While there has been some talk of starting this move, as of this writing it hasn't happened yet.

This talk of indirect and direct rendering naturally leaves some unanswered questions. Let's take a closer look and see how the two differ and where they are used.

Indirect vs. Direct Rendering

The difference between indirect and direct rendering is the number of levels through which the data must pass. That is, how much the data must be massaged before it is actually put in the frame buffer of the video card to be displayed on a monitor. As one would expect, the fewer the levels, the faster the image, and thus the emphasis on direct rendering (via DRI) as part of XFree86 4.0.

When indirect rendering is used, data is copied from the application issuing the graphics output to the X server, and from there to the hardware. This incurs a performance penalty: the application's output must be packaged into a form the X server understands and then, once X has done its job, packaged again and sent to the hardware. For a typical 2-D application, this process is fast enough. For today's CPU- and memory-intensive 3-D applications, however, the overhead is too great for adequate performance.

Direct rendering attempts to streamline this flow of data and allows the application to access the hardware more directly. That is, it allows an application to issue its drawing commands directly to the graphics hardware, with only the minimum amount of necessary intervention by the X server. This ability exists in XFree86 as the Direct Rendering Infrastructure (DRI), developed by Precision Insight.
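An application can in fact ask GLX which path it received: glXIsDirect() reports whether a given context renders directly to the hardware or passes through the X server. A small sketch (context creation and error handling omitted):

```c
/* After creating a GLX context (see the previous example), an application
 * can ask whether it got a direct rendering path.  Sketch only. */
#include <stdio.h>
#include <GL/glx.h>

void report_rendering_path(Display *dpy, GLXContext ctx)
{
    /* glXIsDirect() returns True when rendering commands bypass the
     * X server on their way to the hardware -- i.e., when DRI is in use. */
    if (glXIsDirect(dpy, ctx))
        printf("Direct rendering: commands go straight to the hardware.\n");
    else
        printf("Indirect rendering: commands pass through the X server.\n");
}
```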

Finally, we get to talk about this mysterious DRI that has come up in our discussion a couple of times. So without further ado, read on to learn more about DRI and what it means for Linux users.
