Linux-GGI Project

by Andreas Beck
Introduction

In this article, we explain the intentions and goals of the Linux-GGI Project, along with the basic concepts the GGI programmers use to allow fast, easy-to-use access to graphical services, to hide hardware-level issues from applications and to introduce extensible support for multiple displays under Linux. The Linux-GGI project aims to set up a General Graphical Interface for Linux that will allow easy use of graphical hardware and input facilities under the Linux OS. Existing solutions and standards such as X or OpenGL do deal with graphics issues, but their current implementations under Linux have several (sometimes serious) drawbacks:

  • Console switching is not deadlock-free, because the kernel asks a user-mode application to permit the switch, which is also a security problem: since any user-mode application can lock the console, the kernel has to rely on the application to allow a user-invoked switch. On a stand-alone machine, if the console locks up in an application that never grants the switch, the system has to be rebooted.

  • The Secure Attention Key (SAK), which kills all processes associated with the current virtual console, might help with the above problem; but for graphics applications the machine might still remain locked, because the kernel has no way to do a proper reset of the console. After all, it has no idea which video hardware is present.

  • Any application accessing graphical hardware at a low level has to be trusted, since it must be run as root to gain access to that hardware. The kernel relies on the application to restore the hardware state when a console switch is initiated. Relying on the application might be acceptable for an X server that needs superuser rights for other reasons, but most of us would not want to trust a game that is available to us only in binary form.

  • Input hardware (such as a mouse or a joystick) can be accessed using the current approach, but it can't easily be shared between several virtual consoles and the applications using it.

  • No clean way is available to use more than one keyboard and monitor combination. You might think this is not possible on PC hardware anyway; in fact, with currently existing hardware there are ways to build multi-headed PCs, and the USB peripheral bus to be introduced soon may allow for multiple keyboards, etc. Besides, other architectures do support multiple displays, and if Linux did as well, that would be a good reason to use Linux for applications such as CAD/CAE.

  • Games cannot use the existing hardware at maximum performance, because they have to either use X, which introduces a big overhead (from a game programmer's point of view), or access the hardware directly, which requires a separate driver for every type of hardware they run on.

GGI addresses all these points, and several more, in a clean and extensible way. (GGI does not aim to be a substitute for these existing standards, nor does it want to implement its graphical services completely inside the kernel.) Now, let's have a look at the concepts of GGI, some of which have already been implemented and have proven their usability.

Video Hardware Driver

The GGI hardware driver consists of a kernel space module called the Kernel Graphical Interface (KGI) and a user space library called libGGI. The KGI part of GGI will consist of a display manager that takes care of accessing multiple video cards and does MMU-supported page flipping on older hardware. This method allows for incredibly fast access to the frame buffer from user space whenever possible. (This technique has already been proven: the GO32 graphics library for DJGPP, the GNU C compiler port for DOS, uses this method and has astonishingly fast graphical support on older hardware.) If this memory-mapped access method can be used in GGI, there will be no loss in performance, as the application reads or writes the pixel buffer directly.
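
To make the idea concrete, here is a minimal sketch of what direct, memory-mapped frame buffer access might look like from user space. The device name /dev/ggi0 and the fixed 640x480, 8-bit mode are assumptions made for illustration only; the actual KGI device interface may look different.

    /* Minimal sketch of memory-mapped frame buffer access. The device
     * name and mode parameters below are illustrative assumptions. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FB_WIDTH  640
    #define FB_HEIGHT 480   /* assume an 8-bit 640x480 mode is already set */

    int main(void)
    {
        int fd = open("/dev/ggi0", O_RDWR);   /* hypothetical display device */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Map the frame buffer into our address space. When the
         * application is switched into the background, the kernel can
         * transparently redirect this mapping to a backing store. */
        uint8_t *fb = mmap(NULL, FB_WIDTH * FB_HEIGHT,
                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Draw directly: paint a horizontal line in color 15. */
        for (int x = 0; x < FB_WIDTH; x++)
            fb[100 * FB_WIDTH + x] = 15;

        munmap(fb, FB_WIDTH * FB_HEIGHT);
        close(fd);
        return 0;
    }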

Each type of video card in the system has its own driver, a simple loadable module that registers as many displays as the card can address. (Video cards exist that support two monitors, or a monitor and a TV screen.) The driver module gives the system the information needed to access the frame buffer and the special accelerated features, to set up a given video mode, and to respect the limits of the hardware (e.g., the graphics card, the monitor and any other part of the display system). The module can either be built from a single source file or be linked from precompiled subdrivers for each graphical hardware subsystem (RAMDAC, monitor, clock chip, chipset, accelerator engine). This latter option is the preferred approach, since it allows support for new cards to be added quite easily: only the subdrivers for hardware not already supported need to be implemented and tested. (The others are already in use, and bug fixes there will improve all drivers using them.) This scheme has been used to derive support for many of the S3 accelerator-based cards, and has proved to be very efficient and easy to use. It also allows for efficient simultaneous development for several graphics cards. The subdrivers to be linked together are currently selected at configuration time, but they could also be selected after automatic detection or according to a database (not yet built). Note that the subdrivers do not need to be in source form; as a result, precompiled subdriver object files can be linked together during installation.
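
The following sketch suggests how such a card driver might be assembled from subdrivers. All structure and symbol names here are hypothetical, chosen only to illustrate the scheme, and are not taken from the real KGI sources.

    /* Hypothetical sketch of a card driver composed from per-subsystem
     * subdrivers. */
    struct ggi_subdriver {
        const char *name;
        int (*init)(void);                /* program this subsystem */
        int (*check_mode)(int xres, int yres, int depth, long dotclock);
    };

    struct ggi_card_driver {
        const char *card_name;
        int num_displays;                 /* some cards drive two outputs */
        struct ggi_subdriver *chipset, *ramdac, *clockchip, *accel, *monitor;
    };

    /* Selected at configuration time: generic S3 chipset code linked
     * with the RAMDAC and clock chip found on this particular board. */
    extern struct ggi_subdriver s3_chipset, bt485_ramdac,
                                icd2061a_clock, s3_accel, multisync_monitor;

    struct ggi_card_driver example_board = {
        "Example S3 board", 1,
        &s3_chipset, &bt485_ramdac, &icd2061a_clock, &s3_accel,
        &multisync_monitor,
    };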

As each subdriver knows the hardware exactly, it can prevent the display hardware from being damaged by a bad configuration and can make suggestions about the optimal use of the hardware. For example, the current implementation has drivers for fixed-frequency and multisync monitors that allow optimal timings for any resolution to be calculated on the fly without any further configuration. Of course, completely user-configurable drivers are also possible. In short, in addition to the hardware-level code, the subsystem drivers provide as much information about the hardware as possible. This way the kernel has sufficient means to initialize the card, to reset consoles and video modes when an application is terminated, and to make optimal use of the hardware. The KGI manager will allow a single kernel image to support GGI on all hardware, as any hardware-specific code is in the loadable module and only common services (such as memory mapping code) are provided by the kernel. The KGI manager will also provide data structures and support for almost any imaginable kind of input device.
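
As an example of what calculating timings on the fly means, a multisync monitor driver could derive a pixel clock from the requested resolution and refresh rate and check it against the monitor's limits, roughly as sketched below. The blanking overhead factors and function names are illustrative assumptions.

    /* Rough sketch: suggest a dot clock for a multisync monitor and
     * check it against the monitor's limits. */
    #include <stdio.h>

    struct monitor_limits {
        long min_hfreq, max_hfreq;   /* horizontal frequency range, Hz */
        long max_dotclock;           /* maximum pixel clock, Hz */
    };

    static long suggest_dotclock(int xres, int yres, int refresh,
                                 const struct monitor_limits *mon)
    {
        /* Typical blanking overhead: ~25% horizontally, ~5% vertically. */
        long htotal   = xres + xres / 4;
        long vtotal   = yres + yres / 20;
        long dotclock = htotal * vtotal * (long)refresh;
        long hfreq    = vtotal * (long)refresh;

        if (hfreq < mon->min_hfreq || hfreq > mon->max_hfreq ||
            dotclock > mon->max_dotclock)
            return -1;               /* mode would exceed monitor limits */
        return dotclock;
    }

    int main(void)
    {
        struct monitor_limits mon = { 30000, 64000, 110000000 };
        printf("suggested dot clock: %ld Hz\n",
               suggest_dotclock(800, 600, 72, &mon));
        return 0;
    }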

The user space library, called libGGI, will implement an abstract programming interface for applications. It interfaces to the kernel part using special device files and standard file operations. Applications should use this interface (or APIs built on top of it) to gain maximum performance; however, other APIs can be built that access the special files directly. Understand that in this case the X server will be just a normal application in terms of graphics access. Since X is considered to be the main customer for graphical services, the API will be designed according to the X protocol definition and will implement the set of low-level drawing routines required by X servers. The library will use accelerated functions whenever possible and emulate features not efficiently supported by the hardware found. An important feature of the next generation of graphical hardware is 3D acceleration, which fits easily into the GGI point of view. We plan to provide support for 3D features based on Mesa, which is close to OpenGL and ensures compatibility with platforms other than Linux.
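
The shape of such an interface might look something like the sketch below. All function names are hypothetical, since the real library interface was still being designed when this was written; the point is that the primitives mirror the low-level drawing operations an X server needs.

    /* Hypothetical sketch of an abstract libGGI-style interface. */
    typedef struct ggi_visual ggi_visual;   /* opaque handle to one display */

    /* Open a display via its special device file and set a video mode. */
    ggi_visual *ggi_open(const char *device);          /* e.g. "/dev/ggi0" */
    int ggi_set_mode(ggi_visual *v, int xres, int yres, int depth);

    /* Low-level drawing primitives of the kind X servers require. The
     * library uses the accelerator where available and falls back to
     * optimized memory-mapped routines where it is not. */
    int ggi_fill_rect(ggi_visual *v, int x, int y, int w, int h, int color);
    int ggi_copy_area(ggi_visual *v, int sx, int sy, int w, int h,
                      int dx, int dy);
    int ggi_put_pixels(ggi_visual *v, int x, int y, int w, int h,
                       const void *buf);

    void ggi_close(ggi_visual *v);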

Another issue when dealing with graphics is game programming, as games need the highest possible performance. They also need special support from the video hardware to produce flicker-free animation or realistic images. The current approaches can't meet this need in a reasonable way, since they cannot get help from the kernel (e.g., to use retrace interrupts). GGI can provide this support easily and give maximum hardware support to all applications.
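
To illustrate why kernel help matters here, the sketch below shows the classic double-buffering technique a game would use: draw into the invisible half of video memory, then flip pages during the vertical retrace so no tearing is visible. The ioctl names are invented for illustration; only the kernel, which owns the retrace interrupt, can offer such an operation.

    /* Sketch of retrace-synchronized page flipping. The ioctl names and
     * the two-page buffer layout are assumptions for illustration. */
    #include <sys/ioctl.h>

    #define GGI_WAIT_RETRACE _IO('G', 1)       /* hypothetical: block until
                                                  the next vertical retrace */
    #define GGI_SET_ORIGIN   _IOW('G', 2, int) /* hypothetical: set the
                                                  scan-out start address */

    static void flip_pages(int fd, int *visible_page, int page_size)
    {
        *visible_page ^= 1;                    /* toggle between page 0/1 */
        int origin = *visible_page ? page_size : 0;

        ioctl(fd, GGI_WAIT_RETRACE);           /* kernel owns the IRQ */
        ioctl(fd, GGI_SET_ORIGIN, &origin);    /* flip during retrace */
    }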

Input Hardware Driver

There are many ways to interact with computers—a keyboard, a mouse, even a cybersuit. All of these methods have special protocols to report user actions and even need special hardware to be accessed. GGI will allow any kind of input to be supported without recompiling the kernel for each new device, thus allowing for flexibility and easy configuration. This support is achieved by having a loadable module for each device or device class. Just like the video card drivers, any input device driver will register abstract input devices that convert user actions to events.
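
A hypothetical sketch of the abstract event such drivers might deliver is shown below; every device, from keyboard to cybersuit, is reduced to this common form. The names are illustrative only.

    /* Hypothetical sketch of an abstract input event. */
    #include <stdint.h>
    #include <sys/time.h>

    enum ggi_event_type {
        GGI_EV_KEY_PRESS,       /* character input devices            */
        GGI_EV_KEY_RELEASE,
        GGI_EV_POINTER_MOTION,  /* mice, tablets, trackballs          */
        GGI_EV_BUTTON,          /* pointer or controller buttons      */
        GGI_EV_VALUATOR         /* joystick axes and similar controls */
    };

    struct ggi_event {
        enum ggi_event_type type;
        struct timeval time;    /* when the action happened            */
        int device;             /* which registered device reported it */
        int code;               /* key code, button number or axis     */
        int32_t value;          /* position, delta or pressure         */
    };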

For example, an application might query for the registered devices and select the events it wants to receive, so that a game program could default to using joystick or keyboard input depending on the environment. Installing a game or an X server will not require any configuration beyond copying the binary to its destination directory and starting it. Please note that this methodology will also considerably reduce the effort required to maintain several differently-equipped machines, as the application binaries will be the same for all machines and can be shared via network file systems. Only the GGI modules to be loaded will differ from machine to machine.
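
Building on the event structure sketched above, such a query-and-read loop might look roughly like this. The device node /dev/ggi_input and the ioctl are again assumptions made for illustration.

    /* Sketch of discovering input devices and reading events; assumes
     * struct ggi_event and the event types from the previous sketch. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define GGI_NUM_DEVICES _IOR('G', 16, int)   /* hypothetical query */

    int main(void)
    {
        int fd = open("/dev/ggi_input", O_RDONLY);   /* hypothetical node */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        int ndev = 0;
        ioctl(fd, GGI_NUM_DEVICES, &ndev);  /* how many devices registered? */
        printf("%d input devices available\n", ndev);

        /* A game would now select, say, joystick events if a joystick is
         * registered, fall back to keyboard events otherwise, and then
         * read event records with ordinary read() calls. */
        struct ggi_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev) {
            if (ev.type == GGI_EV_KEY_PRESS)
                break;                      /* quit on first key press */
        }
        close(fd);
        return 0;
    }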

A New Way of Understanding Consoles

GGI defines a console as a pair: a display and a (mandatory) character input device. Optionally, other input facilities, such as a pointing device or controllers, may be attached to a console. The display is capable of presenting alphanumeric data or graphics, while the character device provides character input (just as the name implies). We use these deliberately vague terms because the display might actually be something other than a monitor, e.g., a braille line or another device that helps disabled people work with computers. Similarly, the input might be a keyboard or voice recognition software or hardware—just about anything you can imagine. The character input device is mandatory because it carries the focus: it is tied to one and only one virtual console, which is shown on one of the displays registered by the loaded modules. Any other devices are associated with one of the keyboards, and any user activity is reported to applications running on the focused console. Thus, it is not only possible to have multiple virtual consoles but also (in conjunction with multiple displays) to have several real consoles.
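
A hypothetical data structure capturing this pairing might look like the following; the names are illustrative only.

    /* Hypothetical sketch of the console pairing described above. */
    struct ggi_console {
        struct ggi_display *display;      /* monitor, braille line, ...    */
        struct ggi_chardev *char_input;   /* mandatory: keyboard, voice    */
        struct ggi_pointer *pointer;      /* optional pointing device      */
        struct ggi_control *controllers;  /* optional joysticks etc.       */
        int                 focused_vc;   /* virtual console holding focus */
    };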

If the user wants to switch between two virtual consoles, the keyboard driver will tell the KGI manager to bring the specified virtual console onto the display assigned to it and to report any further keyboard, pointer and controller events to the applications running there. One problem arising from this virtualization is that an application using accelerated features might first have to finish the current accelerator command, or that the frame buffer contents need to be preserved even though the application goes into background mode. GGI will effectively hide these operations from the application. Applications can be placed into one of the following categories, with examples given:

  • The application can redraw its screen without noticeable overhead at any given time, e.g., X server.

  • The application is considerably easier to program when a back-up buffer is provided, in case the frame buffer needs to be accessible at any time, e.g., a ray tracer or any other program that needs to do a lot of calculation to draw an image. This back-up also allows the application to continue drawing to its frame buffer while running in background mode.

  • The application can skip output or simply sleep, if not in foreground mode, thereby reducing system load significantly, e.g., games or software video decoders. SVGALIB works in this manner.

Classes one and three are easy to virtualize: they just have to redraw their buffers when switched to foreground mode, so when switching to background mode, the screen contents can be discarded and drawing requests ignored. The only difficult class is class two. However, since the kernel knows the exact state of the hardware, it can tell a user space daemon to allocate sufficient memory, save the frame buffer there, redirect the memory mapping of the application and tell the library to use optimized drawing methods for memory-mapped buffers instead of accelerated drawing functions.
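
A rough sketch of what that daemon's job amounts to follows, under the same caveat that the function name and the ioctl are invented for illustration and error handling is omitted.

    /* Hypothetical sketch of saving a class-two application's frame
     * buffer on a console switch. */
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>

    #define GGI_REDIRECT_MAP _IOW('G', 32, void *)   /* hypothetical */

    /* Called by the daemon when the kernel reports a switch-away. */
    void *save_framebuffer(int fd, void *fb, size_t fb_size)
    {
        /* 1. Allocate sufficient memory for the backing store. */
        void *backup = malloc(fb_size);
        if (!backup)
            return NULL;

        /* 2. Preserve the current frame buffer contents. */
        memcpy(backup, fb, fb_size);

        /* 3. Ask the kernel to redirect the application's mapping to the
         *    backup, so it can keep drawing in the background; the
         *    library then uses memory-optimized routines instead of the
         *    accelerator. */
        ioctl(fd, GGI_REDIRECT_MAP, &backup);
        return backup;
    }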

GGI plans to add powerful graphical hardware support to the Linux operating system. As with any hardware driver, it needs to have a kernel segment that is kept to a minimum (currently the modules are about 30K in size, and should not become greater than 80K). If accepted by the Linux community, GGI can provide a clean method of dealing with multiple display and input hardware as well as an architecture-independent programming interface that will give good performance on any platform. Also, it will allow hardware manufacturers to provide optimized drivers for their hardware if they wish. During development much care has been and will continue to be taken to isolate machine or hardware-dependent code, whenever possible, in order to provide good portability.


As GGI is still under development, several features are not yet implemented, but a first implementation exists that demonstrates that our concepts are capable of providing easy access to video hardware and of solving all of the points addressed in this article. Support for multiple displays and libGGI are currently being worked on. Of course, introducing a new kernel concept for accessing video hardware will cause several (non-X) applications to be incompatible; but on the other hand, it will ease the configuration of Linux and open up new vistas to game programmers, with an operating system and graphical support that allow maximum performance on any system.

Andreas Beck (becka@sunserver1.rz.uni-duesseldorf.de) studies physics at the University of Duesseldorf, Germany and started the GGI project. He developed the memory mapping code for GGI, worked on the library implementation and made major contributions to the concepts used.

Steffen Seeger (seeger@physik.tu-chemnitz.de) also studies physics at the University of Technology at Chemnitz-Zwickau. He wrote most of the S3 driver code and made major contributions to the console concepts and the kernel drivers.
