The geeks at Active Media Products weren't satisfied with the performance of CompactFlash cards in digital photography applications, so they made their own. The company's 600X Pro line of CF cards, which write at up to 90MB per second, aims to eliminate the memory card's traditional role as the bottleneck when shooting action sequences with DSLRs that fire up to 10 frames per second. Active Media also says that the cards support 0–70°C operating temperatures and are rugged and reliable enough to take into the field. Capacities range from 8GB to 64GB.
Cyberoam iView, an open-source logging and reporting solution, has recently become available in a convenient appliance form. The product caters to the logging and reporting requirements of SMBs and distributed enterprises, delivering a comprehensive view of network activity across dispersed geographic locations. Cyberoam describes the iView appliances as quick-to-deploy, easy-to-manage preloaded hardware devices with terabyte-scale storage, RAID technology, redundancy and high levels of storage reliability. The appliance further enables organizations to gain complete visibility into network activity with real-time security and access reports on top virus attacks, spam recipients, Web users and more, reinforcing organization-wide network security and data confidentiality. It also offers archiving to meet forensic requirements.
Perforce came out swinging in the new year, announcing version 2009.2 of its Software Configuration Management (SCM) System. SCM is a tool that versions and manages source code and digital assets for enterprises of all sizes. The most significant additions to 2009.2 are shelving, real-time metadata replication and additional functionality for working off-line. The shelving feature enables developers to cache modified files in the Perforce Server without first having to check them in as a versioned change. Developers thus can pass pending changes to managers as part of code-review or approval workflows, share works in progress with another team member or workstation, test changes in a distributed build environment, and put aside an effort when a higher-priority task arrives.
Please send information about releases of Linux-related products to email@example.com or New Products c/o Linux Journal, PO Box 980985, Houston, TX 77098. Submissions are edited for length and content.
James Gray is Products Editor for Linux Journal.