Linux in Government: Navy Sonar Opens New Opportunities for Linux Clusters and IBM G5 servers
Lockheed Martin delivered a High Performance Computing (HPC) solution to the US Navy last year to run sonar systems in nuclear submarines. The solution involved Apple Xserve systems using G4 processors and a Red Hat-based Linux operating system. Although few people noticed the announcements made by Terra Soft, makers of Yellow Dog Linux, the event triggered ripples in the industry.
The Lockheed Martin Linux systems varied in two respects from the standard Apple Xserve configuration. First, the solution did not use Apple's Mac OS X operating system. Second, Lockheed Martin built its own chassis and used only the internals of the Xserve. Lockheed Martin wanted the G4 PowerPC chips and Linux to provide a low-heat, low-power-consumption solution. On a nuclear submarine, such features are essential.
In the past, the Navy relied heavily on older embedded solutions, which offered little ability to deploy software. The embedded systems, for example, could not adapt to Web Services that deliver the geographic information system (GIS) data needed in the sonar process. An HPC Linux solution gave users the ability to adapt to various formats of data and encryption, which is critical to the timely delivery of data.
The PowerPC opened a whole new era, since it enabled engineers to use software, not hardware, for large computing jobs. Lockheed Martin's engineers discovered they could do more with a PowerPC with AltiVec than with a traditional Digital Signal Processor (DSP), because the PowerPC allows for an adaptive, flexible computing platform.
AltiVec(tm) is Motorola's trademark for the first PowerPC Single Instruction, Multiple Data (SIMD) extension. AltiVec was jointly developed by Motorola, IBM and Apple. Apple calls this same SIMD technology Velocity Engine. When IBM discusses this particular technology, it uses VMX, the technology's original code name.
A SIMD system packs multiple data elements into a single register and performs the same calculation on all of them at the same time. In the Lockheed Martin solution, Terra Soft provided software engineering and support services. Modifications included device driver enhancements, kernel development and firmware tuning to allow serial-port terminal control. Terra Soft also aided in performance testing and helped with third-party engineering and systems integration.
The integrated solution allowed Lockheed Martin to meet the requirements of the Navy's contract for sonar systems for nuclear submarines. The key to the solution involved a specific form-factor, processor density and Linux. Unfortunately, Red Hat does not offer a PowerPC port of their own software.
In addition to using Linux for sonar in nuclear submarines, Lockheed Martin demonstrated a further commitment to Linux by awarding a contract to CSP for use in the Navy's advanced E-2C Hawkeye aircraft.
CSP Inc., based in Billerica, Massachusetts, won the bid for the Hawkeye with its 2000 SERIES MultiComputer products. The MultiComputer division of CSP supplies high-performance Linux cluster systems for defense applications, including radar, sonar and surveillance signal processing.
CSP features Linux HPC products such as its FastCluster server line. The company uses the Myrinet interconnect for MPI interprocessor communications, PowerPC processors with AltiVec technology and Yellow Dog Linux, which it calls the industry-standard operating system for the PowerPC.
CSP also says that it uses a full complement of vectorizers, compilers, development tools and run-time performance libraries from the Linux community. Its solutions provide instant booting from a cold start, error-correcting memory, a fault-tolerant MPI-like library, hot-swappable hardware, extended environmental specifications and built-in test.
The High Performance Computing (HPC) market remains a bright spot in the technology sector. This time last year, Intel-based platforms appeared to have the edge in the market. For example, Linux Networx was selected to build a cluster of 1,408 dual-processor Opteron servers for Los Alamos Labs. However, most HPC wins went not to HP and IBM but to companies such as Linux Networx and Penguin Computing.
The Harvard Research Group wrote in HRG Assessment: HP High Performance Computing LC Series that:
The Linux cluster market in 2003 was more than one-third of the overall Linux server market in terms of revenue. HP dominated the worldwide Linux server market with about 29% of revenue market share, and Linux servers provided over 25% of HP's HPTC market share of 37%. The worldwide Linux cluster market is expected to grow faster than the worldwide Linux server market over the next 2-3 years as the transition continues from RISC/Unix technology to industry-standard server and operating system technology. This growth will occur because HPC buyers are focused on price/performance, and Linux clusters have a 5x to 20x price/performance advantage over previous-generation RISC/Unix platforms.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
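The find-plus-grep combination mentioned above can be written as a single pipeline. The directory tree and file contents below are made up for illustration; the article's /home example works the same way:

```shell
# Build a small sample tree to search (illustrative names only)
mkdir -p /tmp/demo-home/alice /tmp/demo-home/bob
echo "ERROR: disk full"       > /tmp/demo-home/alice/app.log
echo "all systems nominal"    > /tmp/demo-home/bob/app.log
echo "ERROR: retrying"        > /tmp/demo-home/bob/notes.txt   # not a .log file

# Find every .log file under the tree, then search each one for "ERROR";
# grep -l prints only the names of the files that match.
find /tmp/demo-home -name '*.log' -exec grep -l 'ERROR' {} +
```

Only alice/app.log is printed: notes.txt matches the pattern but is filtered out by find, and bob/app.log is found but rejected by grep. Neither tool knows about the other, yet together they make a new tool.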
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
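As a reminder of the baseline the webinar starts from, a traditional crontab entry is just five time fields followed by a command; the script path below is hypothetical:

```
# min hour day-of-month month day-of-week  command
# Rotate logs at 02:30 every day (hypothetical script path)
30 2 * * * /usr/local/bin/rotate-logs.sh
```

Anything beyond this model, such as job dependencies, retries or cross-host scheduling, is where cron alone starts to run out of road.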
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide