Although Plat'Home Co., Ltd., has been serving up Linux to the Japanese market since 1992, the company is just now bringing its OpenMicroServer product to North American shores via its US subsidiary. OpenMicroServer is a small, tough, easy-to-use, easy-to-configure, low-cost Linux server. It provides high reliability to customers who do not have much extra room and are likely to ignore the machine for weeks or months after installation. Key features include a compact design (9"x4"x1.3"), integrated Power over Ethernet, stable long-term operation at up to 122°F (50°C) while using PoE functionality (based on a 625-day endurance test), a 400MHz AMD Alchemy (MIPS) processor, two Gigabit Ethernet ports, one 100Mbit Ethernet (PoE-capable) port, two USB 2.0 ports and two serial ports. Plat'Home is proud of its product's “Japanese characteristics”, meaning it doesn't stand out, and it doesn't complain. It just gets the job done.
Quicker than most to find a new and interesting open-source topic, Packt Publishing has released Deepal Jayasinghe's new book Apache Axis2. Apache Axis2 is a core engine for Web services with two different implementations: Apache Axis2/Java and Apache Axis2/C. This book takes readers through the basics of Web services and Axis2, as well as the details of Axis2's architecture. It is a step-by-step practical guide that uses many real-life examples. Some of the topics covered include installation, AXIOM, pipes and interceptors, module concepts, session management and more. The book assumes familiarity with Web standards, such as SOAP, WSDL and XML parsing.
Author Edward Benson's intent with his new book The Art of Rails, published by Wrox, is to pick up where the API leaves off and explain how to turn good Rails code into beautiful Rails code: simple, effective, reusable and evolvable. Benson wants you to think like a Rails developer with quality, elegance and maintainability in mind. The Art of Rails blends design and programming, identifying and describing the very latest in design patterns, programming abstractions and development methodologies that have emerged for the modern Web. Readers will explore topics such as techniques for organizing code between and within Model, View and Controller; how to think like a REST-based developer and use Rails 2.0 to translate those thoughts into code; advanced Ruby and meta-programming; design patterns for AJAX, Web APIs, HTML decomposition and schema development; and behavior-driven development. The book is designed to advance the skills of developers already familiar with Rails.
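To give a flavor of the Ruby meta-programming the book covers, here is a minimal sketch of one common idiom: generating a family of similar methods at class-definition time with define_method rather than writing each one by hand. The class and method names below are hypothetical examples, not drawn from the book.

```ruby
# Illustrative sketch of Ruby meta-programming (hypothetical example,
# not taken from The Art of Rails): generate one predicate method per
# status symbol instead of defining draft?, published? and archived?
# individually.
class Report
  STATUSES = [:draft, :published, :archived]

  attr_accessor :status

  # define_method runs at class-definition time, so each status gets
  # its own question-mark method closing over the symbol s.
  STATUSES.each do |s|
    define_method("#{s}?") { status == s }
  end
end

r = Report.new
r.status = :published
puts r.published?   # => true
puts r.draft?       # => false
```

Rails uses this style of code generation throughout its own internals, which is one reason the book treats meta-programming as a core skill rather than a curiosity.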
Version 1.0 of the FreeIPA Project is now official. FreeIPA is an integrated security information management solution that combines Linux (currently Fedora), Fedora Directory Server, MIT Kerberos and NTP with a Web interface and command-line administration tools. FreeIPA currently supports identity management; support for policy and audit management is planned for future releases. The project developers state that the use of standard protocols, such as LDAP and Kerberos, allows for easy integration of other OSes into an IPA realm for centralized identity management. The developers also encourage testing and deployment of FreeIPA and are seeking feedback from the field.
Announcing more new games on the Linux platform is such a treat. Game developer Paradox Interactive and the two-man Swedish developer team Frictional Games have released a Linux version of their popular game Penumbra: Black Plague. The Penumbra series, which includes the new Penumbra: Black Plague and its prequel Penumbra: Overture, is a first-person adventure game that focuses on story, immersion and puzzles. Instead of using violence to progress, players must use their wits to guide Philip on his quest to unravel the past. Paradox says that Penumbra “is very different from other adventure games”. The games feature a 3-D engine that utilizes cutting-edge technology and an advanced physics system that creates a new level of environmental interaction. Players can open drawers, pull levers, pick up objects and more, using natural mouse movements, creating a highly interactive and dynamic game world. The next game in the series, Penumbra: Requiem, is due out in Summer 2008, and it also will offer a Linux version.
James Gray is Products Editor for Linux Journal.