Under-Ice Sonar Visualization
In World War II, the Arctic became an active theater of operations for German and Soviet submarines, which occasionally ducked under the ice to escape detection. After the war, interest in cold-water acoustics led to work on sonar and navigation instruments for submarines operating in the Arctic. As the Cold War intensified naval concern about the possibility of nuclear warfare in the Arctic, nearly all US submarines were given an under-ice capability.
With the appearance of the first nuclear submarine, USS Nautilus, the Arctic Ocean beneath the pack ice finally could be explored fully and used for naval operations. Today, under-ice operations are standard for submarines of the US and other nations. Furthermore, studies suggest that sonar could be used as a tool for detecting and localizing under-ice oil spills. So, for both strategic and environmental reasons, the study of under-ice sound properties is important.
For more than two decades, the Naval Undersea Warfare Center, located in Newport, Rhode Island, has been investigating and modeling the under-ice sound environment. Current work involves the use of 3-D visualization to aid in the understanding of complex scattering that results from the impact of sound energy on the ice blocks making up an underwater pressure ridge. These pressure ridges, called ice sails above the water's surface and ice keels below the surface, are formed when floating Arctic ice collides (Figure 1).
Current 3-D visualization work builds on a previous effort that was designed conceptually to show submarine commanders target location volumes created by the rendering of data from towed submarine sound sensors. Subsequently, this software has been modified and greatly enhanced to display environmental information in all parts of the world, including under the Arctic ice pack. The enhanced 3-D display is capable of immersive stereopsis viewing in multiple environments, including fixed shore-based facilities, such as a 3-D CAVE™, or on mobile systems, such as a laptop using a head-mounted display (HMD, Figure 2).
It is anticipated that these high-level graphics techniques will enable both rapid target identification, be it tactical or environmental, and data prospecting, allowing for a better understanding of the complex sound behavior in areas of interest to the Navy.
Although the original software was written to run under the Silicon Graphics IRIX operating system, at the time of this writing, the new Undersea Environmental Visualization (UEV) version has been developed and tested under Red Hat Linux 7.0 through 9.0. Linux was chosen as the operating system for several reasons. First, it is compatible with current and future submarine combat systems. Second, it is a generic UNIX operating system, which means software and script files developed under Linux can be transferred readily to UNIX operating systems such as HP-UX and IRIX. Third, it is an open-source operating system with a large user community that can be tapped for system optimization and maintenance.
The UEV system is composed of two main modules, the bezel and the main 3-D display application. These two modules communicate with each other by way of TCP/IP sockets. Figure 3 illustrates this architecture.
Separate modules were chosen for the display of the 2-D and 3-D data to allow separate viewing media to be used for each display, thus achieving the highest resolution for both. In its expanded form, the bezel also supports a 2-D overhead view. Still, this system is flexible enough to allow both displays to be shown simultaneously on a single screen, as shown in Figure 3. This simultaneous view does not support a 2-D overhead view, but it does support all the expanded version's functionality.
The bezel is a digital information and 3-D scene control program. The variables passed between the bezel and the main program include 3-D oceanographic/topographic maps, 3-D ice cover data, including ice keels, ice keel target strength data and 3-D sound propagation data, along with vehicle position data. The bezel for the UEV display was written using the XForms library. XForms is a GUI toolkit based on Xlib for the X Window System. It features a rich set of objects, such as buttons, scrollbars and menus, integrated into an easy and efficient object/event callback execution model that allows fast and easy construction of applications. In addition, the library is extensible, and new objects easily can be created and added to it. XForms was chosen for the prototype version of the UEV software because it is a stable and easy-to-use application programming interface (API), and because absolutely no recoding is needed for operation under Linux.
Communication between the bezel and the main 3-D display happens by way of sockets that are established as datagrams, in which messages sent over the network are self-contained packets delivered asynchronously between the two applications. This asynchronous form of communication was chosen because the data update rate between the two programs is slow enough that this primitive form of interprogram communication was sufficient. These links are primitive in their construction, requiring the specific IP address of the machines running the bezel and 3-D main application. The reality, at least for research and development at Navy labs, is that fast and inexpensive implementation is the driving force behind the creation of prototype software. Software often doesn't advance past the prototype stage, so the cost associated with programming elegance is a luxury.
However, a requirement for the follow-on UEV software is that it must operate under Microsoft Windows as well as Linux. The Xlib version of XForms is no problem for Linux, but it is a big problem for Windows unless it is operated in the Cygwin environment. Although this is an option, the preference is for code that runs natively in both the Microsoft Visual C++ and Linux environments.
Our solution is the future conversion of the bezel to the Fast Light Tool Kit (FLTK), which will solve multiple problems. First, because FLTK compiles under both Microsoft Visual C++ and Linux, the same software can be used for both systems. Second, the transfer of information between the bezel and main application can be converted from clunky TCP/IP sockets to a more elegant shared memory method. Finally, the bezel code can be brought into the 21st century by conversion of its XForms C routines to FLTK C++ methods. The conversion process currently is underway and is drawing in large part on the Open Inventor-based software that NUWC, Virginia Tech and the Naval Research Laboratory (NRL) jointly developed for the TALOSS Project. As the system evolves to rely more and more on 3-D interaction with the 3-D environment, the bezel controls will become less important and may disappear entirely. Most likely, they will be replaced by a virtual toolchest and a gestural-based interface.
The 3-D UEV display receives its mapping and navigational information from an under-ice canopy database that is loaded at startup and updated based on the evolution of the acoustic situation. The under-ice canopy database consists of an ice volume of uniform depth with one or more embedded ice keels. The area of acoustic coverage determines the extent of the ice canopy.
All under-ice acoustic information is pre-rendered as OpenGL Performer binary (pfb) files. Construction of the pfb files begins with using Matlab 7.0.1 on a Linux platform. Matlab is a flexible interactive tool for doing numerical computations with matrices and vectors, and it also is capable of displaying this information graphically, in both 2-D and 3-D forms. Therefore, by using a combination of Matlab and C-based transformation code, the under-ice information that comes out of a FORTRAN-based model, developed by G. Bishop, is massaged into a form that is compatible with the OpenGL Performer-based 3-D UEV display.
The transformation starts with a Matlab routine that calculates all polygonal surfaces and their normals. It then outputs this information to the C-coded routines that convert it to pfb file format. The pfb conversion is a modification of the Silicon Graphics utility pfConvert, which is available for both IRIX and Linux. The code snippets shown in Listing 1 were added to pfConvert.c to read in the polygonal information generated by the Matlab code. The pfConvert routine then uses its own libraries to output the data to a Performer pfb file.

The 3-D main application combines all tactical, navigation and acoustic information into a comprehensive 3-D picture. It renders the picture using the platform-independent scene graph OpenGL Performer, chosen because of the need for an efficient and cost-effective means of displaying both planar and volumetric data within the same 3-D display. OpenGL Performer provided the least labor-intensive means of achieving this integration, although open-source freeware, such as OpenSceneGraph, could provide an inexpensive alternative in future releases of the software.