RSTA-MEP and the Linux Crewstation

Automatically detect the enemy in the dark and notify friendly units where he is.
Crewstation Application Software

This prototype crewstation comes from Raytheon's Tiger simulator legacy, unlike the embedded side, which is Raytheon's implementation of the US Army's Weapons Systems Technical Architecture Working Group (WSTAWG) Common Operating Environment (COE). Although both the embedded side and the crewstation side are message-passing systems, the two messaging systems are not compatible, so a translation module sits between them. To minimize latency and CPU usage, this translator process is split into two threads using the POSIX threads library. One thread waits for input from the embedded side on a socket and translates it into the shared memory pool used by the crewstation modules. The other thread takes data from the shared memory pool and writes it to the socket for the embedded side to pick up. Dividing this work into two threads and using full compiler optimizations keeps the latency to a minimum.
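A minimal sketch of the translator's two-thread structure follows. The embedded_msg_t type and the pool_put()/pool_get() calls are hypothetical stand-ins for the real message format and shared-memory-pool interface:

/* translator sketch: two threads bridge the embedded-side
 * socket and the crewstation shared memory pool */
#include <pthread.h>
#include <unistd.h>

typedef struct { char data[256]; } embedded_msg_t; /* hypothetical */

extern int  sock_fd;  /* socket to the embedded side */
extern void pool_put(const embedded_msg_t *m); /* hypothetical pool write */
extern int  pool_get(embedded_msg_t *m);       /* hypothetical, blocks for data */

/* thread 1: embedded side -> shared memory pool */
static void *inbound(void *arg)
{
    embedded_msg_t m;
    while (read(sock_fd, &m, sizeof m) == sizeof m)
        pool_put(&m);  /* translate and publish to crewstation modules */
    return NULL;
}

/* thread 2: shared memory pool -> embedded side */
static void *outbound(void *arg)
{
    embedded_msg_t m;
    while (pool_get(&m))
        write(sock_fd, &m, sizeof m);
    return NULL;
}

void start_translator(void)
{
    pthread_t tin, tout;
    pthread_create(&tin,  NULL, inbound,  NULL);
    pthread_create(&tout, NULL, outbound, NULL);
}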

The video relay module reads a separate gigabit Ethernet network connection devoted to video. It decides which video window should display the data and routes it there.
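A sketch of the relay's dispatch step follows; the frame header layout and route_to_window() are assumptions for illustration, not the project's actual video protocol:

/* video relay sketch: read a frame header from the video
 * socket and route the payload to the right window */
#include <stdint.h>
#include <unistd.h>

typedef struct {
    uint32_t window_id;  /* framing, WAS, SFOV or chip */
    uint32_t length;     /* payload bytes to follow */
} frame_hdr_t;           /* hypothetical header layout */

extern int  video_fd;
extern void route_to_window(uint32_t id, const void *buf, uint32_t len);

void relay_one_frame(void)
{
    frame_hdr_t hdr;
    static char buf[1 << 20];

    if (read(video_fd, &hdr, sizeof hdr) != sizeof hdr)
        return;
    /* note: a real relay would loop until length bytes arrive */
    if (hdr.length <= sizeof buf &&
        read(video_fd, buf, hdr.length) == (ssize_t)hdr.length)
        route_to_window(hdr.window_id, buf, hdr.length);
}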

The GUI control panel on the crewstation was generated by Builder Xcessory (see Resources). Three major concerns drove the design of the GUI: limited screen space, the need to reflect the state of the embedded side and the desire to use a grip instead of a mouse or trackball.

The first major design issue encountered was screen real estate. One monitor was used to display all imagery, leaving only the bottom third of the screen available for the GUI. The mode of the system and the controls for that mode are displayed in this third. The system has two major modes: WAS mode and conventional framing mode. In WAS mode the sensor quickly scans a user-selected area, and the grips allow the user to pick a section of that scanned area to be displayed as a super field of view (SFOV). In framing mode, live video is displayed, and the grips allow the user to point the sensor. As access to framing controls is not needed during WAS and vice versa, both sets of widgets were designed to occupy the same screen space. This is the sensor mode pane of the GUI in Figures 3 and 4. When one set is available for use, the other is hidden. Other functionality, such as controls for the automatic target detection software, was placed in separate windows and made accessible from buttons on the main GUI. These windows pop up over the image display area.

Figure 3. Crewstation with System in Framing Mode

Figure 4. Crewstation with System in WAS Mode
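Swapping the two widget sets of the sensor mode pane can be done with Motif's manage/unmanage calls. A minimal sketch, assuming hypothetical framing_pane and was_pane container widgets:

/* mode pane swap sketch: the framing and WAS control sets
 * occupy the same screen space, so entering one mode manages
 * that set and unmanages the other */
#include <Xm/Xm.h>

extern Widget framing_pane, was_pane; /* hypothetical names */

void show_framing_controls(void)
{
    XtUnmanageChild(was_pane);
    XtManageChild(framing_pane);
}

void show_was_controls(void)
{
    XtUnmanageChild(framing_pane);
    XtManageChild(was_pane);
}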

This lack of screen space also presents a second problem. There is a need for immediate visual cues of system response to user input as well as a report of the current state of the embedded side. Rather than use separate widgets for control and status objects, the same widget is used for both. When the operator manipulates a widget, the operator's command is reflected automatically on the GUI, and the widget's callback code is triggered. The callback sends a message containing the requested change to the mode model. This request is passed to the embedded sensor side, which then returns a status. Should the status differ from the request, the mode model notifies the GUI, which in turn updates the widget to display the correct status value.
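A minimal sketch of this round trip for a single ToggleButton follows; the widget's purpose and send_to_mode_model() are hypothetical stand-ins for the real messaging interface:

/* control/status widget sketch: the same ToggleButton both
 * sends the operator's request and displays the embedded
 * side's reported status */
#include <Xm/Xm.h>
#include <Xm/ToggleB.h>

extern void send_to_mode_model(int requested_state); /* hypothetical */

/* valueChangedCallback: operator toggled the widget */
void toggle_cb(Widget w, XtPointer client, XtPointer call)
{
    XmToggleButtonCallbackStruct *cbs =
        (XmToggleButtonCallbackStruct *) call;
    send_to_mode_model(cbs->set);
}

/* called by the mode model when the embedded side reports a
 * status that differs from the request; notify is False so
 * the correction does not re-fire the callback and loop */
void toggle_status_update(Widget w, int actual_state)
{
    XmToggleButtonSetState(w, actual_state, False);
}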

A third design issue is the need for a mouseless environment. Vehicle movement and lack of physical desktop space make it difficult to use a mouse, trackball or touchscreen. A keyboard is available but is used only for minimal data entry. For these reasons, we wanted to manipulate the GUI with the hand grip.

Mouseless mode was accomplished in an early version of the GUI by adding manual widget traversals and button-press events. Moving the hat switch on the grip changed widget focus via XmProcessTraversal calls. Pressing the Select button on the grip defined and sent an XEvent, similar to this:


/* sending key press events */
#include <X11/Xlib.h>
#include <X11/keysym.h>

XKeyEvent ev;
Window    rootWin;
int       x, y;
int       root_x, root_y;
Window    win;

/* find the window currently under the pointer;
 * findPointerWindow() is a helper from the full listing */
rootWin = RootWindowOfScreen(guiScreen);
win = findPointerWindow(rootWin, &x, &y,
                        &root_x, &root_y);

/* build a synthetic spacebar KeyPress aimed at that window */
ev.type = KeyPress;
ev.send_event = True;
ev.display = display;
ev.window = win;
ev.root = rootWin;
ev.subwindow = 0;
ev.time = CurrentTime;
ev.x = 1;
ev.y = 1;
ev.x_root = 1;
ev.y_root = 1;
ev.state = 0;
ev.same_screen = True;

ev.keycode = XKeysymToKeycode(display, XK_space);
XSendEvent(display, win, True, KeyPressMask,
           (XEvent *)&ev);

Unlike the current version of the GUI, the previous version consisted of a single topLevelShell that contained only simple widgets, for example, PushButtons and ToggleButtons. The current GUI includes multiple shells (pop-up windows) and composite widgets, such as OptionMenus. Simply calling XmProcessTraversal to change focus does not work across shells. Sending a button press on an OptionMenu pops up the menu choices; however, sending a second button press does not select the option and does not pop down the menu.

The home audience should keep these facts in mind:

  1. The window manager is the boss. When dealing with multiple shells, remember that window managers do not readily relinquish control of the focus—or anything else for that matter.

  2. Widget hierarchy has an effect. The order of traversal in a group of widgets is determined partially by the order in which they were declared in your (or BX's) code.

  3. Be aware of behind-the-scenes code. Consider a RadioBox containing two ToggleButton children, with toggle A selected. When an incoming message to select toggle B is received, simply swapping the values of the children's XmNset resources looks correct on screen. The parent widget, however, still thinks toggle A is selected, which can lead to unexpected ToggleButton behavior; see the sketch after this list.
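A minimal sketch of the fix for that third point, assuming hypothetical toggleA and toggleB children: let XmToggleButtonSetState notify the parent, so the RadioBox's own radio logic unsets toggle A.

/* RadioBox update sketch: toggleA and toggleB are
 * hypothetical ToggleButton children of a RadioBox */
#include <Xm/Xm.h>
#include <Xm/ToggleB.h>

extern Widget toggleA, toggleB;

void select_toggle_b(void)
{
    /* wrong: looks right on screen, but the RadioBox
     * parent still thinks toggle A is selected:
     *   XtVaSetValues(toggleA, XmNset, False, NULL);
     *   XtVaSetValues(toggleB, XmNset, True,  NULL);
     */

    /* right: notify = True fires the valueChangedCallback,
     * and the RadioBox unsets toggle A itself */
    XmToggleButtonSetState(toggleB, True, True);
}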

In the current version of our project, a separate process takes input from the hand grip and controls the mouse pointer using a combination of XWarpPointer and the X server's XTest extension (see Resources). The crewstation also displays video generated from data sent by the embedded side. The video relay process reads this from a socket and dispatches it to a window. There are four windows: framing video, WAS, SFOV and image chips.
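A minimal sketch of that pointer-control process, assuming a hypothetical read_grip() input interface (the XTest calls require linking with -lXtst):

/* grip-to-pointer sketch: translate hand-grip deltas into
 * pointer motion and button presses via the XTest extension */
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

extern Display *display;
extern int read_grip(int *dx, int *dy, int *select_pressed); /* hypothetical */

void grip_pointer_loop(void)
{
    int dx, dy, pressed;

    while (read_grip(&dx, &dy, &pressed)) {
        /* move the pointer relative to its current position */
        XWarpPointer(display, None, None, 0, 0, 0, 0, dx, dy);

        /* fake a left-button click when Select is pressed */
        if (pressed) {
            XTestFakeButtonEvent(display, 1, True,  CurrentTime);
            XTestFakeButtonEvent(display, 1, False, CurrentTime);
        }
        XFlush(display);
    }
}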

As mentioned above, the framing video is live image data. The image in the WAS window is a compressed image strip that represents an image taken by a rapid scan of the sensor across a fixed area. Symbology in the WAS strip includes locations where the targeting systems think there might be targets. The SFOV is a larger view of a user-selected section of the WAS strip that shows more detail. Target symbology and information from the digital map are visible here. Image chips are segments of the scene where the targeting system finds something interesting. These are presented to the operator for evaluation and reporting to other systems that are off-vehicle. Figure 3 shows an outdoor view with the system in framing mode with the GUI, framing video and WAS strip. Figure 4 shows the system in WAS mode with the WAS strip, SFOV, GUI and an image chip window.

Video is implemented in OpenGL as a texture on a polygon: each frame of video data is loaded into an OpenGL texture, the texture is applied to the polygon, and the image appears when the polygon is drawn. (See the example code in Listing 1, available at ftp.linuxjournal.com/pub/lj/listings/issue114/6634.tgz, to illustrate the technique.) We chose OpenGL for video because it offers many options for processing and displaying the data. The image can be resized or rotated if the data is generated in a different orientation from how it is displayed. OpenGL has many primitives for drawing symbology on top of the image, some built-in image-processing ability and double buffering for flicker-free updates. OpenGL is portable and well documented. Additionally, we can off-load much of the work from the CPU onto the graphics card hardware.
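A minimal sketch of the texture-on-a-polygon technique, independent of Listing 1; the 512x512 single-channel frame format is an assumption:

/* video-as-texture sketch: each frame is loaded into an
 * OpenGL texture and drawn on a quad.  The fixed 512x512
 * luminance format is an assumption, not the project's
 * actual video format. */
#include <GL/gl.h>

#define VID_W 512
#define VID_H 512

static GLuint tex;

void video_init(void)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* allocate once; frames are uploaded with glTexSubImage2D */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, VID_W, VID_H,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
}

void video_draw_frame(const unsigned char *pixels)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, VID_W, VID_H,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);

    glBegin(GL_QUADS);               /* the textured polygon */
    glTexCoord2f(0, 1); glVertex2f(-1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1,  1);
    glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glEnd();
    /* swap buffers in the windowing layer for flicker-free update */
}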

The SFOV selector controls what part of the WAS strip picture is selected for display on the SFOV window. It also controls where the red rectangle is drawn in the WAS strip window.

The crewstation has a separate control and moding module. Instead of having pieces of logic scattered throughout the modules in the system, they are concentrated in this one module. This design makes the other parts of the system simpler and more reusable. It also makes the moding module fiendishly complex. The mode model has to embody the knowledge of how the other pieces interact and mirror the state of the embedded system and the crewstation. It allows the crewstation to take only permissible actions based on that state and monitors the embedded side for errors and unexpected state changes.
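The idea of concentrating moding logic in one place can be sketched as a transition table; the STANDBY mode and its rule here are invented purely for illustration:

/* mode model sketch: one module owns the mode logic and
 * other modules ask it for permission.  The modes and the
 * transition table are hypothetical illustrations. */
typedef enum { MODE_STANDBY, MODE_FRAMING, MODE_WAS,
               MODE_COUNT } sys_mode_t;

/* allowed[from][to]               STANDBY FRAMING WAS */
static const int allowed[MODE_COUNT][MODE_COUNT] = {
    /* from STANDBY */            { 1,     1,      0 },
    /* from FRAMING */            { 1,     1,      1 },
    /* from WAS     */            { 1,     1,      1 },
};

static sys_mode_t current = MODE_STANDBY;

int mode_request(sys_mode_t want)
{
    if (!allowed[current][want])
        return 0;      /* denied: not permissible in this state */
    current = want;    /* forward to embedded side, await status */
    return 1;
}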
