Product of the Day: SuperComputing 2004 Product Spotlight -- NASA's Columbia Altix Supercomputer
Linux at the forefront of Space Research with NASA.
One of the world's fastest supercomputers was unveiled at Supercomputing 2004 in Pittsburgh last week. This is an exciting space in which to watch Linux grow: everywhere you looked, Linux was being used in various hardware configurations. The show rocked, bringing together supercomputing solution providers and university research facilities to show what is being done with this computing power. You could mingle with scientists from all over the world and take a peek at their projects. Japan's 5,120-processor Earth Simulator was pretty cool, and there was lots more to see.
The Columbia supercomputer, named to honor the crew of the Space Shuttle Columbia lost on February 1, 2003, is an integrated cluster of 20 interconnected SGI Altix 512-processor systems. Its 10,240 Intel Itanium 2 processors deliver sustained performance of 51.87 trillion calculations per second (teraflops) and peak at 60.96 teraflops. The system is now ranked the second fastest on the TOP500 list; the only one faster is Blue Gene, IBM's supercomputer at the US Department of Energy's Lawrence Livermore National Laboratory.
Built in only 120 days, this supercomputer gives NASA the computational power to address immediate issues such as "Return to Flight" for the Space Shuttle. Simulations of hydrogen gas flow in the Shuttle's propulsion systems, for example, can now be done in days instead of weeks. Other projects for Columbia will include earth modeling, space science and aerospace vehicle design. "Simulations of the evolution of the Earth and planetary ecosystems with high fidelity have been beyond the reach of Earth scientists for decades," said Ghassem Asrar, Deputy Associate Administrator of NASA's Science Mission Directorate. "With Columbia, scientists are already seeing dramatic improvements in the fidelity of simulations in such areas as global ocean circulation, prediction of large scale structures in the universe, and the physics of supernova detonations." Calculations that used to take years can now be done in days.
The SGI Altix 3700 supercomputer is a performance breakthrough for open-source computing. The hardware combines the cost effectiveness of clusters with the scalable performance and big-data capabilities of a supercomputer, offering large global shared memory for demanding HPC applications that move big data sets between cluster nodes. Round-trip data latency can be as low as 50 nanoseconds. Each node can scale up to 256 processors with 3TB of memory, and the hardware runs an industry-standard 64-bit Linux environment. On the LINPACK benchmark, the system achieved 42.7 trillion calculations per second sustained on 16 of its 20 systems, an 88% efficiency rating.
NASA is not the only organization to use SGI's hardware; it can also be found at the University of Rochester, Ford Motor Company and the French energy giant Total. The Altix supercomputer is ideal for the computational needs of structural mechanics, fluid dynamics, and chemical and material sciences that require massive data sets.
10,240 Intel Itanium 2 processors (512 processors each system)
20 terabytes of memory
80 SGI Altix IX-bricks
20 instances of Linux operating systems
440TB of SGI InfiniteStorage solution
An additional 800TB of existing data is managed and accessed through the storage network
80 dual-port 2Gb SGI FibreChannel HBAs
20 SGI 10Gb Ethernet cards
One 32-port Cisco 10GigE switch
120 Voltaire InfiniBand HCAs
One 288-port InfiniBand switch
Four 24-port InfiniBand switches
Two 64-port Brocade switches
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
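That find-plus-grep combination is easy to sketch. Everything below is hypothetical for the sake of a self-contained example: a sandbox directory stands in for /home, and the search string is just an illustration of "a particular entry":

```shell
# Build a tiny sandbox with two log files (hypothetical stand-in for /home)
mkdir -p /tmp/logdemo/app
echo "ERROR: connection refused" > /tmp/logdemo/app/server.log
echo "all systems nominal" > /tmp/logdemo/app/client.log

# Chain the tools: find narrows to .log files, grep -l prints only the
# names of files that contain the search string
find /tmp/logdemo -type f -name '*.log' -exec grep -l 'connection refused' {} +
```

Swapping `/tmp/logdemo` for `/home` gives exactly the tool described above, built in one line from two standard utilities.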
Cron traditionally has been considered another such tool, this one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
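For comparison, traditional cron scheduling is a single crontab line per job. The script path and schedule below are hypothetical examples, not anything from the webinar:

```
# m   h   dom mon dow   command
# Run a (hypothetical) log-rotation script every day at 02:30
30    2   *   *   *     /usr/local/bin/rotate-logs.sh
```

Anything beyond fixed times, such as dependencies between jobs, retries or cross-host coordination, is where plain cron starts to run out of road, which is exactly the question the webinar takes up.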
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
- SuperTuxKart 0.9.2 Released
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor does it consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance of this open architecture to bear for your organization. There are no smoke and mirrors here: just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide