Wolfram Research's new gridMathematica 7 enables users to take advantage of Mathematica's built-in parallelization capabilities and, thus, run more tasks in parallel on more powerful hardware and clusters. gridMathematica adds extra computation kernels and automated network distribution tools, allowing users to achieve faster execution “without changing a line of code”, says Wolfram. The series comprises three products: gridMathematica Local, gridMathematica Server and Wolfram Lightweight Grid Manager. gridMathematica requires Mathematica and is available for Linux, Mac OS X and Windows.
More so than nearly all its rivals, IBM has made “going green” a core mission. Not only has IBM rolled out its “Big Green” and “Big Green Linux” initiatives, but it also has now published one of the few books on green IT, called The Greening of IT: How Companies Can Make a Difference for the Environment. In the book, IBM senior staffer John Lamb tackles both macro and micro issues surrounding the reduction of the environmental impact caused by IT operations. At the macro scale, Lamb looks at the role of governments and electrical utilities and the importance of good regulations and incentives. At the micro level, Lamb examines the nuts and bolts of reducing energy consumption in the data center, covering organizational issues, ROI, procurement, asset disposal, measurement of energy consumption, virtualization, cooling equipment and much more. Finally, the author explores case studies of all types and sizes worldwide, including IBM's own $1 billion Big Green initiative.
The crew at Super Talent has been busy preparing not one but two new families of solid-state drives (SSDs), the UltraDrive ME and UltraDrive LE. The company calls the lines “next-generation SSDs” that offer “noticeable performance gains at boot time, application loading and accessing data”. Although both lines come in 32GB, 64GB and 128GB capacities, the UltraDrive ME line adds a 256GB model. The UltraDrive LE is rated for a maximum sequential read speed of 230MB/s, while the UltraDrive ME comes in at 200MB/s. Regarding maximum sequential write speed, the UltraDrive LE clocks in at 170MB/s and the UltraDrive ME at 160MB/s. Super Talent says that the drives are designed to be “compatible with all known operating systems”, including Linux, DOS and Windows.
Making the area of virtualization even more interesting is ScaleMP's updated Versatile SMP (vSMP) Foundation 2.0 virtualization solution. vSMP Foundation aggregates multiple industry-standard off-the-shelf x86 servers (rackmounted or blade systems) into a single virtual high-end system for the HPC market. This new release of vSMP, says ScaleMP, offers “significantly enhanced performance” through support for the forthcoming Intel Nehalem processor family, as well as enhanced enterprise-class features, such as improved high availability, partitioning of a single virtual system into multiple isolated environments, extended remote management, enhanced profiling capabilities and support for Emulex LightPulse Fibre Channel HBAs.
Compiere ERP—a comprehensive open-source application that automates business processes, such as accounting, purchasing, order fulfillment, manufacturing, warehousing and CRM—is now available on the Amazon Elastic Compute Cloud (EC2). The new Compiere Cloud Edition is delivered with a complete technology stack—that is, an operating system, application server and database—that can be deployed on Amazon EC2 “in a matter of minutes”. Compiere says that the “convenient virtual computing environment” reduces the cost of ERP deployment by eliminating up-front capital costs for hardware and software and reducing ongoing IT infrastructure support costs. The company also notes the advantages of cloud computing, which allows IT departments to increase capacity or add capabilities “on the fly”—without investing in new hardware, personnel or software—by accessing virtual servers available over the Internet to handle computing needs. A range of subscriptions includes application support, service packs and access to Compiere's automated upgrade tools.
James Gray is Products Editor for Linux Journal.