Fixstars has released v2.1 of Y-HPC for the Sony PlayStation 3, which the company calls the world's only commercial, cross-architecture cluster-construction suite. The release's key improvement is the next generation of ps3vram, which uses the PS3's video RAM for fast temporary file storage or swap. Fixstars says this version of ps3vram is up to 50% faster than its predecessors and is now enabled as swap automatically. Also included are the new features found in Yellow Dog Enterprise Linux v6.1, such as the updated 2.6.28 kernel, the IBM Cell SDK, improved ps3vram support and Libfreevec. Fixstars says the improvements in compute performance in Y-HPC v2.1 will allow existing and new PlayStation 3 clusters to tackle problems never before believed practical.
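For readers curious what ps3vram's automatic swap setup amounts to, here is a minimal manual sketch. It assumes the driver exposes the video RAM as a block device named /dev/ps3vram; the device name and priority value are illustrative assumptions, not details taken from the release (Y-HPC v2.1 performs this setup for you):

```shell
# Hypothetical sketch: enabling PS3 video RAM as swap by hand.
# Assumes the ps3vram driver exposes video RAM as /dev/ps3vram
# (an assumption for illustration). Requires root on PS3 hardware.

# Format the video-RAM block device as swap space.
mkswap /dev/ps3vram

# Enable it with a high priority so the kernel prefers it
# over slower disk-backed swap.
swapon -p 10 /dev/ps3vram

# Confirm the new swap area is active.
swapon -s
```

Because video RAM is much faster than disk, giving it a higher swap priority lets the kernel fill it first before falling back to any disk-backed swap.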
Please send information about releases of Linux-related products to firstname.lastname@example.org or New Products c/o Linux Journal, PO Box 980985, Houston, TX 77098. Submissions are edited for length and content.
James Gray is Products Editor for Linux Journal.