Under the paradigm of “Really Simple Integration”, SnapLogic has released SnapLogic for Amazon Elastic Compute Cloud (EC2), a new variant of its open-source data integration framework. SnapLogic for EC2 “provides Amazon Web Services users with a convenient SnapLogic deployment option that scales easily and eliminates the costs of acquiring and maintaining expensive server hardware”. It also makes it easier than ever to “easily integrate data in the cloud with data behind the firewall”. Offered in two editions, a GPL'd Community Edition and a commercial Enterprise Edition, SnapLogic enables enterprises to make data from databases, SaaS applications, SOA Web services and other common data sources available quickly and easily. The Really Simple Integration paradigm allows knowledge workers to use familiar tools, such as Web browsers, Google and Excel, to discover, consume, transform and publish enterprise data, creating a virtuous cycle of self-service data access and distribution.
The latest iSCSI solution from iStor is the iS512-10G, a 10Gb model of the iS512 integraStor storage system, which iStor calls “the world's fastest scalable iSCSI storage array optimized for small and medium businesses”. This second-generation 10GbE iSCSI storage array offers a native 10Gbps architecture supporting full-duplex wire-speed data rates in excess of 1,100MB/sec and is “2.5 times faster and significantly less expensive than 4G Fibre Channel”, says iStor. iStor also notes that mass adoption of 10GbE is at or near its tipping point, as the cost per Gbps of 10GbE ports drops below that of 1GbE ports and server consolidation accelerates under virtualization.
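The cost-per-Gbps comparison iStor cites is simple to work through: divide a port's price by its bandwidth and compare the normalized figures. The sketch below illustrates the arithmetic with hypothetical list prices (they are not iStor's figures):

```python
def cost_per_gbps(port_price_usd, gbps):
    """Normalize a port's price to dollars per gigabit of bandwidth."""
    return port_price_usd / gbps

# Hypothetical prices for illustration only.
one_gbe = cost_per_gbps(100, 1)    # $100 for a 1GbE port -> $100/Gbps
ten_gbe = cost_per_gbps(600, 10)   # $600 for a 10GbE port -> $60/Gbps

# Once the 10GbE figure drops below the 1GbE figure, the faster port is
# the cheaper bandwidth -- iStor's "tipping point".
print(one_gbe, ten_gbe)
```

With these example numbers, the 10GbE port costs less per Gbps despite the higher sticker price, which is the tipping-point argument in miniature.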
Got “dirty” data? Skip the Pine-Sol and opt for Talend's Open Profiler. Open Profiler is an open-source data profiler that enables companies to assess the quality of their data and decide which actions must be taken to correct the dirty data that irritates customers and costs companies time and money. “Data profiling is the first step to achieving reliable, trustworthy data”, says Talend. Such profiling reduces the time and resources needed to find problematic data and allows companies to identify potential problems before beginning data-intensive projects, such as data integration or new application development. It also gives business analysts more control over the maintenance and management of the data.
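To make the idea of profiling concrete, here is a minimal sketch of the kind of column-level metrics a data profiler computes: null counts, distinct counts and pattern conformance. This is a generic illustration in Python, not Talend's API (Open Profiler itself is a GUI tool), and the sample records are invented:

```python
import re

def profile_column(values):
    """Basic profiling metrics for one column of raw values."""
    non_null = [v for v in values if v not in (None, "", "NULL")]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
    }

def pattern_conformance(values, pattern):
    """Fraction of non-empty values matching a regex, e.g. an e-mail shape."""
    non_null = [v for v in values if v]
    if not non_null:
        return 0.0
    rx = re.compile(pattern)
    return sum(1 for v in non_null if rx.fullmatch(v)) / len(non_null)

# Hypothetical sample column: two valid e-mail addresses, one malformed
# value and two empties.
emails = ["a@example.com", "bob@example.org", "not-an-email", None, ""]
print(profile_column(emails))
print(pattern_conformance(emails, r"[^@\s]+@[^@\s]+\.[^@\s]+"))
```

Metrics like these are what let an analyst spot problem columns before a data integration project starts, rather than after the bad rows have propagated downstream.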
Arkeia is expanding its appliance business with the new EdgeFort 500 Series, an all-in-one hardware and software backup system. These appliances come standard with a 5TB virtual tape library (expandable to 10TB), disk-to-disk-to-tape management software and Fibre Channel connectivity, and they are fully integrated with Arkeia's network backup software. Arkeia's federated data management architecture allows remote and centralized data protection, making it possible for remote and branch offices to back up, restore and archive critical data with no local IT resources needed. The EdgeFort 500 Series targets the largest data centers, while the earlier 100, 200 and 300 models were aimed at small, medium and large ones, respectively.
Although HP's Tru64 UNIX Advanced File System (AdvFS) has been available for more than 16 years, the big news is the recent contribution of its source code to the Open Source community. HP states that “the AdvFS source code includes capabilities that increase uptime, enhance security and help ensure maximum performance of Linux filesystems”. HP will contribute the code as a reference implementation of an enterprise Linux filesystem under the terms of the General Public License, version 2, for compatibility with the Linux kernel. In addition, HP will provide design documentation, test suites and engineering resources. HP further hopes that the source code will serve as a technology base to advance ongoing Linux development, providing a comprehensive foundation for kernel developers to leverage and improve Linux filesystem functionality.
James Gray is Products Editor for Linux Journal.