Former President Nixon would have balked at Enkive, a new open-source e-mail archiving and retrieval application from The Linux Box. That's because Enkive captures e-mail messages as they arrive or are sent to ensure they are retained before a worker can delete them in an e-mail client. This feature helps organizations address the issues of compliance with laws and regulations governing communications, as well as litigation support. It permits recovery of e-mail in full support of an organization's retention policies. In addition, storage costs are reduced by eliminating the capture of redundant messages and attachments.
The team at RackForce has announced availability of ddsCloud Enterprise, an enterprise-level hosted private cloud solution. RackForce describes ddsCloud Enterprise as fully virtualized network, storage and compute capacity in an on-demand model that utilizes best-in-class technologies from Cisco, IBM, Microsoft and VMware. Built on RackForce's new state-of-the-art GigaCenter infrastructure, the firm says the results are “unprecedented scalability, flexibility and greenness”. ddsCloud Enterprise leverages virtualization and unified fabric to combine computing, network and storage into one seamless system. Compared with previous computing models, RackForce asserts that it has seen deployment times reduced by 85%, customer costs cut by up to 30% and a carbon footprint merely 1/50th the size of other cloud offerings located in conventional North American data centers.
The author duo of Erik Hatcher and Otis Gospodnetic has updated the book Lucene in Action from Manning Publications to a new 2nd edition. The 500-pager is touted as the definitive guide to Lucene, an open-source, highly scalable, super-fast search engine that developers can conveniently integrate into applications. Since the first edition, Lucene has grown from a nice-to-have feature into an indispensable part of most enterprise apps. The book explores how to index documents; introduces searching, sorting and filtering; and covers the numerous changes to Lucene since the first edition. All source code has been updated to the current Lucene 2.3 APIs.
Publisher Wiley calls A History of International Research Networking “the first book written and edited by the people who developed the Internet”, and it covers the history of creating universal protocols and a global data transfer network. Editors Howard Davies and Beatrice Bressan, two veterans of the CERN particle physics research lab, are two of many insiders who contribute never-before-published perspectives on the historic, technical development of today's indispensable Internet.
The company cPacket is now marketing the cVu320G network appliance, a solution for data centers, service providers and telecommunications companies that enables on-demand capacity management, resource allocation and real-time troubleshooting of bursts and spikes. The cVu320G provides complete packet inspection filtering, flexible traffic aggregation, selective duplication and flow-based load balancing, as well as granular, wire-speed performance monitoring for 32 10-Gigabit links. cPacket's rationale for the appliance is threefold: first, today's data centers struggle with the growing stampede to 10 Gigabit and the increasing virtualization of platforms and services; second, monitoring tools have not kept pace with these developments, and, as a consequence, data centers are being overwhelmed with huge volumes of complex traffic that they no longer have the visibility to control; and third, the consequences include intermittent congestion, performance degradation and major service disruptions to end users that are becoming increasingly common. The solution is based on cPacket's unique, 20-Gigabit “complete packet inspection” chips and Marvell's 10-Gigabit Prestera switch.
James Gray is Products Editor for Linux Journal.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
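The find-plus-grep combination described above can be sketched in a few lines of shell. This is a minimal, self-contained illustration: the directory layout, filenames and the search string "ERROR" are invented for the example, and a temporary tree stands in for a real /home.

```shell
# Build a small sample tree to search (paths are illustrative).
tmp=$(mktemp -d)
mkdir -p "$tmp/home/alice" "$tmp/home/bob"
echo "ERROR: disk full"   > "$tmp/home/alice/app.log"
echo "all systems normal" > "$tmp/home/bob/app.log"

# Find every .log file under the tree and list the ones that
# contain "ERROR"; -print0/xargs -0 handles filenames with spaces.
find "$tmp/home" -name '*.log' -print0 | xargs -0 grep -l 'ERROR'
```

Run against the sample tree, only alice's log is listed. Swapping `grep -l` for plain `grep` would print the matching lines themselves instead of the filenames.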
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
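For readers who haven't looked at a crontab recently, the baseline being discussed is a set of entries like the following, each pairing a five-field schedule with a command (the script path here is purely illustrative):

```
# min  hour  day-of-month  month  day-of-week   command
30     2     *             *      1-5           /usr/local/bin/rotate-logs.sh
```

This entry runs the script at 2:30am every weekday. Cron's limits (no dependencies between jobs, no retries, no cross-host coordination) are exactly the gaps the webinar's "beyond cron" discussion addresses.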
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
- Google's SwiftShader Released
- SuperTuxKart 0.9.2 Released
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide