Under the paradigm “Really Simple Integration”, the firm SnapLogic has released SnapLogic for Amazon Elastic Compute Cloud (EC2), a new variant of its open-source data integration framework. SnapLogic for EC2 “provides Amazon Web Services users with a convenient SnapLogic deployment option that scales easily and eliminates the costs of acquiring and maintaining expensive server hardware”. SnapLogic for EC2 also makes it possible to “easily integrate data in the cloud with data behind the firewall”. Offered in two editions, a GPL'd Community Edition and a commercial Enterprise Edition, SnapLogic enables enterprises to integrate data quickly and easily from databases, SaaS applications, SOA Web services and other common data sources. The Really Simple Integration paradigm allows knowledge workers to use familiar tools, such as Web browsers, Google and Excel, to discover, consume, transform and publish enterprise data, creating a virtuous cycle of self-service data access and distribution.
The latest iSCSI solution from iStor is the iS512-10G, a 10Gb model of the iS512 integraStor storage system, which iStor calls “the world's fastest scalable iSCSI storage array optimized for small and medium businesses”. This second-generation 10GbE iSCSI storage array offers a native 10Gbps architecture supporting full-duplex, wire-speed data rates in excess of 1,100MB/sec and is “2.5 times faster and significantly less expensive than 4G Fibre Channel”, says iStor. iStor also notes that mass adoption of 10GbE is close to, or perhaps at, its tipping point, given that the cost per Gbps of 10GbE ports has dropped below that of 1GbE ports and that virtualization continues to drive server consolidation.
Got “dirty” data? Skip the Pine-Sol and opt for Talend's Open Profiler. Open Profiler is an open-source data profiler that enables companies to assess the quality of their data and decide which actions must be taken to correct the dirty data that irritates customers and costs companies time and money. “Data profiling is the first step to achieving reliable, trustworthy data”, says Talend. Such profiling reduces the time and resources needed to find problematic data and allows companies to identify potential problems before beginning data-intensive projects, such as data integration or new application development. It also gives business analysts more control over the maintenance and management of the data.
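To make the idea concrete, here is a minimal, illustrative sketch of what a profiler measures first: per-column row counts, nulls and distinct values. This is not Talend Open Profiler's code or API, just a hand-rolled example in plain Python; the `customers` records and column names are invented for the demonstration.

```python
# Minimal data-profiling sketch (illustrative; not Talend Open Profiler).
# For each column in a list of dict records, report the row count, the
# number of null/empty values, and the number of distinct non-null values.

def profile(records, columns):
    """Return simple per-column quality metrics for a list of dict records."""
    stats = {}
    for col in columns:
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v not in (None, "")]
        stats[col] = {
            "rows": len(values),
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return stats

# Hypothetical sample data with two typical quality problems.
customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},               # dirty: missing email
    {"id": 3, "email": "a@example.com"},  # dirty: duplicate email
]

report = profile(customers, ["id", "email"])
# report["email"] shows 1 null and only 1 distinct address across 3 rows,
# flagging both problems before any downstream integration project begins.
```

A real profiler adds pattern checks (are all values valid e-mail addresses?), statistics on numeric columns and cross-column dependency analysis, but the null and distinct counts above are the starting point.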
Arkeia is expanding its appliance business with the new EdgeFort 500 Series, an all-in-one hardware and software backup system. This set of appliances comes standard with a 5TB virtual tape library (expandable to 10TB), disk-to-disk-to-tape management software and Fibre Channel connectivity, and it is fully integrated with Arkeia's network backup software. Arkeia's federated data management architecture allows remote and centralized data protection, making it possible for remote and branch offices to back up, restore and archive critical data with no local IT resources needed. The EdgeFort 500 Series targets the largest data centers, while the earlier 100, 200 and 300 Series serve small, medium and large ones, respectively.
Although HP's Tru64 UNIX Advanced File System (AdvFS) has been available for more than 16 years, the big news is the recent contribution of its source code to the Open Source community. HP states that “the AdvFS source code includes capabilities that increase uptime, enhance security and help ensure maximum performance of Linux filesystems”. HP will contribute the code as a reference implementation of an enterprise Linux filesystem under the terms of the General Public License version 2, for compatibility with the Linux kernel. In addition, HP will provide design documentation, test suites and engineering resources. HP further hopes that the source code will serve as a technology base for the ongoing development of Linux, providing a comprehensive foundation on which kernel developers can build to improve Linux filesystem functionality.
James Gray is Products Editor for Linux Journal.