Simulators for Training Firefighters

by Douglas Maxwell

According to the Federal Emergency Management Agency (FEMA), there was a 31% decrease in the incidence of structure fires throughout the United States between 1987 and 2001. A direct result of this reduction is less firefighting experience for our firefighters. As more-experienced firefighters retire, they are replaced by comparatively less-experienced personnel. This situation mandates optimal training techniques. Visualization and simulation technologies are maturing at a rapid pace and offer great potential to augment current training programs.

Using today's visualization technologies, a training simulator can represent real-world environments with a high degree of realism. Models of actual facilities, buildings and areas can be rendered in great detail. In addition, the computational improvements made to PCs and laptops make them practical and inexpensive deployment platforms. These training techniques also can be adopted easily by the current and next generations of firefighters, who typically engage in video gaming for recreation. Because of the inherent dangers of training for fire emergencies, it is hoped that these emerging visualization technologies can augment current training techniques and better prepare firefighters for emergencies.

Models

The New York Fire Department (NYFD) recently built a multimillion-dollar extension to its training facility at Randall's Island (RI), near Manhattan. This facility is a one-block re-creation of typical architectures found in the five boroughs. Among the buildings are a brownstone, tenement, pizza shop and auto parts store. In September 2003, a team from the Naval Undersea Warfare Center (NUWC) was invited to tour this facility and photograph it so they later could create textures for the models being developed for the NYFD.

Figure 1. The NYFD Training Facility at Randall's Island

Figure 2. Model of the RI Facility, Kitchen Fire in a Pizza Shop

The NYFD's architectural contractor provided plans for the buildings in the form of CAD drawings. The next step was to model the buildings from these plans using MultiGen-Paradigm's Creator package. Finally, the models were completed by texture mapping them with the digital photos. This process took about four weeks.

This methodology can be applied to any architecture, from buildings to vehicles. Current plans for this technology include creating models of targets of strategic importance for use in advanced training of first responders. In addition, NUWC plans to apply this technology to the functional testing and evaluation of command and control centers for future naval architectures (Figure 3).

Figure 3. A corridor aboard ex-USS Shadwell, a decommissioned US Navy ship used as a damage control research facility.

Software

The underlying software used to control the environment was a modification of work done for my thesis research at the Naval Research Laboratory. With the help of Rob King, I created the gestural interface needed to navigate through a true 3-D synthetic environment. With this interface, navigation is accomplished by pointing in the direction the user wants to travel and pressing a button to move. The navigation algorithm is accelerative: the longer the user holds down the forward button, the faster the user moves through the environment.
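
In rough outline, such an accelerative scheme can be sketched as follows. This is an illustrative C++ fragment, not the actual NRL code; the updateView() helper and the constants are hypothetical, and only pfChanView() is a real Performer call:

    /* Sketch of accelerative pointing navigation (illustrative only).
       The Polhemus tracker supplies a pointing direction; speed builds
       the longer the "move" button is held. */
    #include <Performer/pf.h>

    static float speed = 0.0f;            /* current travel speed          */

    void updateView(pfChannel *chan, pfVec3 pos, pfVec3 hpr,
                    pfVec3 pointDir,      /* unit vector from the tracker  */
                    int buttonDown, float dt)
    {
        const float accel    = 2.0f;      /* assumed acceleration (m/s^2)  */
        const float maxSpeed = 10.0f;     /* assumed speed clamp (m/s)     */

        if (buttonDown) {
            speed += accel * dt;          /* longer press = faster travel  */
            if (speed > maxSpeed)
                speed = maxSpeed;
        } else {
            speed = 0.0f;                 /* release stops the motion      */
        }

        pos[0] += pointDir[0] * speed * dt;   /* move along the hand's     */
        pos[1] += pointDir[1] * speed * dt;   /* pointing direction        */
        pos[2] += pointDir[2] * speed * dt;

        pfChanView(chan, pos, hpr);       /* commit the new eyepoint       */
    }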

The graphics were handled by the SGI Performer 3.0.1 scene graph for Linux. Performer is a well-established scene graph based on the OpenGL graphics libraries, and it offers Linux users solid performance. It was important to retain a wide variety of display options, because we wanted to be able to deploy a prototype system that could be used in both stereoscopic and monoscopic modes, as well as on multiple display devices.
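
For readers unfamiliar with Performer, the skeleton of such an application looks roughly like the following. This is a generic sketch of the standard Performer setup sequence rather than our actual code, and the model filename is a placeholder:

    /* Generic OpenGL Performer skeleton (C API); "building.flt" stands
       in for a model produced in Creator. */
    #include <Performer/pf.h>
    #include <Performer/pfdu.h>

    int main(void)
    {
        pfInit();                           /* initialize Performer       */
        pfMultiprocess(PFMP_DEFAULT);       /* default process model      */
        pfdInitConverter("building.flt");   /* load the OpenFlight loader */
        pfConfig();                         /* configure pipes/processes  */

        pfPipe *pipe = pfGetPipe(0);
        pfPipeWindow *pw = pfNewPWin(pipe);
        pfPWinType(pw, PFPWIN_TYPE_X);      /* an X window on this pipe   */
        pfOpenPWin(pw);

        pfScene *scene = pfNewScene();
        pfAddChild(scene, pfdLoadFile("building.flt"));

        pfChannel *chan = pfNewChan(pipe);
        pfChanScene(chan, scene);
        pfChanFOV(chan, 45.0f, -1.0f);      /* vertical FOV from aspect   */
        pfChanNearFar(chan, 0.1f, 1000.0f);

        while (1) {
            pfSync();                       /* wait for frame boundary    */
            /* update pfChanView() here from the gestural interface       */
            pfFrame();                      /* cull and draw the frame    */
        }
        return 0;
    }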

Hardware

The software is demonstrated using a modified Dell Precision 340. The system is built around an Intel Pentium 4 processor running at 1.8GHz, with 512MB of RAM and three video cards. The video subsystem includes one NVIDIA NV20 (GeForce 3, 64MB) AGP video card and two NVIDIA NV17 (GeForce4 MX 440, 64MB) PCI video cards. The system runs a stock installation of Red Hat 9.
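
A triple-card layout of this sort typically is described to XFree86, the X server shipped with Red Hat 9, using one Device section per card, tied together in a ServerLayout. The excerpt below is a hypothetical sketch, not our actual configuration; the BusID values are placeholders that must match the real slots (check with lspci), and the Monitor and Screen sections follow the usual pattern and are omitted here:

    # Hypothetical XFree86 4.x excerpt for a three-card system.
    Section "Device"
        Identifier  "GeForce3-AGP"
        Driver      "nv"
        BusID       "PCI:1:0:0"
    EndSection

    Section "Device"
        Identifier  "GeForce4MX-PCI-A"
        Driver      "nv"
        BusID       "PCI:2:9:0"
    EndSection

    Section "Device"
        Identifier  "GeForce4MX-PCI-B"
        Driver      "nv"
        BusID       "PCI:2:10:0"
    EndSection

    Section "ServerLayout"
        Identifier  "ThreeHead"
        Screen  0  "Screen0"
        Screen  1  "Screen1"  RightOf  "Screen0"
        Screen  2  "Screen2"  RightOf  "Screen1"
        Option  "Xinerama"  "true"
    EndSection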

The software also is demonstrated on a Sony VAIO laptop powered by an Intel Pentium 4 processor running at 2.6GHz, with 512MB of RAM and integrated ATI RADEON IGP graphics. The purpose of this platform is to demonstrate the flexibility of the graphics options.

In addition to the graphics and displays, we used a Polhemus magnetic tracking system in the gestural interface as an alternative to a mouse interface. A Logitech Wingman joystick was re-engineered for use as a gestural input device. Figure 4 shows the inside of the joystick. A tracking sensor is embedded in the grip of the joystick. Communication with the computer is achieved by rewiring the electronics of standard serial mice to the buttons within the joystick, so the computer interprets the joystick button presses as mouse clicks.
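
To illustrate the idea, the following hypothetical C++ sketch decodes button state from a Microsoft-protocol serial mouse (1200bps, 7 data bits), which is how such a rewired device would present itself on a serial port; the device path is an assumption:

    /* Hypothetical sketch: decoding button presses from a rewired
       Microsoft-protocol serial mouse on /dev/ttyS0 (path assumed). */
    #include <cstdio>
    #include <cstring>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        memset(&tio, 0, sizeof(tio));
        tio.c_cflag = CS7 | CREAD | CLOCAL;  /* 7 data bits, no parity  */
        tio.c_cc[VMIN] = 1;                  /* block for one byte      */
        cfsetispeed(&tio, B1200);            /* MS mice run at 1200bps  */
        tcsetattr(fd, TCSANOW, &tio);

        unsigned char b;
        while (read(fd, &b, 1) == 1) {
            if (!(b & 0x40))                 /* bit 6 marks the first   */
                continue;                    /* byte of a 3-byte packet */
            int move = (b & 0x20) != 0;      /* left button = "move"    */
            int alt  = (b & 0x10) != 0;      /* right button            */
            printf("move=%d alt=%d\n", move, alt);
        }
        close(fd);
        return 0;
    }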

Figure 4. A Logitech Wingman Joystick with Tracking Sensor Added

Prototype

As testing progresses, we hope these visualization and simulation technologies can help meet ever-expanding training needs for both military and civilian emergency response teams. Applications for this limited prototype include pretraining practice runs for rescue workers and navigational training for workers unfamiliar with the environment.

Future plans and upgrades to this software are a bit more ambitious. One use for the system is as a scenario-driven classroom/firehouse trainer. It will include instructor-steered or preconfigured scenarios in which the trainer reacts to trainee/student input and adjusts the scenario accordingly. This will be supported by physics-based modeling of fire/smoke/heat and so on. As with the prototype, future iterations of this software will be available in various display configurations, from large/multiple-screen classrooms to PCs and laptops.

In addition to the scenario-driven training, this software also is planned to be a post-exercise debriefing tool. The location of trainees in a training environment, such as Randall's Island or a burn building, can be tracked at any time during the training exercise. Trainers could replay the training event, showing participants' locations at given times. Planned features include split-screen displays that could show participants' individual viewpoints during the exercise as well as the bird's-eye view.
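
As a sketch of what the replay data might look like, the following illustrative C++ fragment logs timestamped positions per trainee and looks up a position at a requested playback time; the structure and names are hypothetical, not part of the current system:

    /* Illustrative replay log: timestamped positions per trainee, with
       a lookup for each trainee's position at a playback time t. */
    #include <map>
    #include <string>
    #include <vector>

    struct Sample { double t; float x, y, z; };  /* seconds, meters */

    /* One track of samples per trainee, keyed by name. */
    typedef std::map<std::string, std::vector<Sample> > TrackLog;

    /* Last logged sample at or before playback time t
       (assumes the track is non-empty and time-ordered). */
    Sample sampleAt(const std::vector<Sample> &track, double t)
    {
        Sample cur = track[0];
        for (size_t i = 0; i < track.size() && track[i].t <= t; ++i)
            cur = track[i];
        return cur;
    }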

Acknowledgements

The author would like to thank Mr. Jim Pollock of the Naval Undersea Warfare Center for funding this project; Dr. Stephan Hitman of the NYFD for providing logistical support and resources; and Dr. Larry Rosenblum of the Naval Research Laboratory for permission to use the ex-USS Shadwell model.

Resources for this article: /article/7499.

Douglas Maxwell is a mechanical engineer and research scientist at the Naval Undersea Warfare Center. His areas of expertise include design synthesis in virtual environments and synthetic training applications. He lives with his wife and dachshund in Newport, Rhode Island.
