Simulators for Training Firefighters
According to the Federal Emergency Management Agency (FEMA), the incidence of structure fires throughout the United States decreased by 31% between 1987 and 2001. One direct result of this reduction is less firefighting experience for our firefighters. As more-experienced firefighters retire, they are replaced by comparatively less-experienced personnel, a situation that demands optimal training techniques. Visualization and simulation technologies are maturing rapidly and offer great potential to augment current training programs.
Using today's visualization technologies, a training simulator offers a realistic representation of real-world environments. Models of real-world facilities, buildings and areas can be rendered in great detail. In addition, the computational improvements made to PCs and laptops make them practical and inexpensive platforms to deploy. These training techniques also can be adopted easily by the current and next generations of firefighters, who typically engage in video gaming for recreation. Because training for fire emergencies is inherently dangerous, these emerging visualization technologies may augment current training techniques and better prepare firefighters for emergencies.
The New York Fire Department (NYFD) recently built a multimillion-dollar extension to its training facility at Randall's Island (RI), near Manhattan. This facility is a one-block re-creation of typical architectures found in the five boroughs. Among the buildings are a brownstone, tenement, pizza shop and auto parts store. In September 2003, a team from the Naval Undersea Warfare Center (NUWC) was invited to tour this facility and photograph it so they later could create textures for the models being developed for the NYFD.
The NYFD's architectural contractor provided plans for the buildings in the form of CAD drawings. The next step was to model the buildings from these plans using MultiGen-Paradigm's Creator package. Finally, the models were completed by texture mapping them with the digital photos. This process took about four weeks.
This methodology can be applied to any architecture, from buildings to vehicles. Current plans for this technology include creating models of targets of strategic importance for use in advanced training of first responders. In addition, NUWC plans to apply this technology to testing and evaluating the function of command and control centers for future naval architectures (Figure 3).
The underlying software used to control the environment was a modification of work done for my thesis research at the Naval Research Laboratory. Rob King and I created the gestural interface needed to navigate through a true 3-D synthetic environment. With this interface, navigation is accomplished by pointing in the direction the user wants to travel and pressing a button to move. The navigational algorithm is accelerative, which means the longer the user depresses the forward button, the faster the user moves through the environment.
The graphics were handled by the SGI Performer 3.0.1 scene graph for Linux. Performer is a well-established scene graph based on the OpenGL graphics libraries, and it offers Linux users solid performance. It was important to retain a wide variety of display options, because we wanted to be able to deploy a prototype system that could be used in both stereoscopic and monoscopic modes, as well as on multiple display devices.
The software is demonstrated using a modified Dell Precision 340. The system is an Intel Pentium 4 processor running at 1.8GHz, with 512MB of RAM and three video cards. The video subsystem includes one NVIDIA NV20 (GeForce 3/64MB) AGP video card and two NVIDIA NV17 (GeForce 4 MX 440/64MB) PCI video cards. The system is running a stock installation of Red Hat 9.
The software also is demonstrated on a Sony VAIO laptop powered by an Intel Pentium 4 processor running at 2.6GHz, with 512MB of RAM and an ATI RADEON IGP card. The purpose of this platform is to demonstrate the flexibility of the graphics options.
In addition to the graphics and displays, we used a Polhemus magnetic tracking system in the gestural interface as an alternative to a mouse interface. A Logitech Wingman joystick was re-engineered for use as a gestural input device; Figure 4 shows the inside of the joystick, with a tracking sensor embedded in the grip. Communication with the computer is achieved by remapping standard serial mice and wiring them to the buttons within the joystick, so the computer interprets the joystick button presses as mouse clicks.