BOXX Technologies announced a new line of 3-D rendering workstations based on dual Opteron processors; machines built on the 240, 242 and 244 processors are now available. M4 workstations use NVIDIA Quadro architecture for modeling and rendering 3-D content and animation with Maya, 3ds max, SOFTIMAGE XSI, LightWave 3D and Houdini. The standard workstation includes the AMD-8111 HyperTransport PCI tunnel, the AMD-8151 HyperTransport AGP tunnel, a 128-bit dual-channel memory bus, up to 8GB of ECC registered 333MHz DDR in four DIMM slots, a dual-channel UltraDMA 133 IDE controller and six-channel audio. Custom-configured workstations are also available. The workstations have lightweight aluminum chassis for heat dissipation, and two 92mm fans provide airflow.
BOXX Technologies, Inc., 10435 South Burnet Road, Suite 120, Austin, Texas 78758, 877-877-2699, www.boxxtech.com.
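As a back-of-the-envelope check (our arithmetic, not a figure BOXX publishes), the peak theoretical bandwidth of a 128-bit dual-channel DDR333 memory bus works out to roughly 5.3GB/s:

```python
# Peak theoretical bandwidth of the M4's memory bus (illustrative
# arithmetic only; sustained real-world bandwidth will be lower).
bus_width_bits = 128        # 128-bit dual-channel bus (2 x 64-bit channels)
transfers_per_sec = 333e6   # DDR333: 333 million transfers per second

bytes_per_transfer = bus_width_bits / 8
peak_bw = transfers_per_sec * bytes_per_transfer  # bytes/second

print(f"{peak_bw / 1e9:.1f} GB/s peak")  # prints "5.3 GB/s peak"
```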
The first products in Interphase's new network security product line are the 45NS (PMC) and 55NS (PCI) network security acceleration adapters. Designed to eliminate traffic bottlenecks caused by VPNs, gateways, routers and firewalls, the accelerators off-load bandwidth-intensive IPSec processing from the host CPU. The accelerators handle header analysis, payload extraction, compression, encryption, authentication and packet assembly. Both adapters offer 500Mbps 3DES throughput and accelerate DES, MD5, SHA-1, RC4 and AES security algorithms. They also offer a 64-bit 66MHz PCI bus, 64MB of private memory and support for full-duplex OC-3 rates and 512K simultaneous sessions.
Interphase, Parkway Centre, Phase 1, 2901 North Dallas Parkway, Suite 200, Plano, Texas 75093, 800-327-8638, www.interphase.com.
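The processing steps Interphase lists (compression, encryption, authentication and packet assembly) can be sketched in software to show what the adapters take off the host CPU. The sketch below is our own illustrative pseudo-IPSec pipeline, not Interphase code: compression uses zlib, authentication uses HMAC-SHA-1 (a real IPSec algorithm), the key is a made-up placeholder, and encryption is deliberately stubbed out, because real 3DES/AES is precisely the work these cards exist to accelerate.

```python
import hashlib
import hmac
import struct
import zlib

AUTH_KEY = b"example-auth-key"  # hypothetical key, for illustration only

def build_packet(spi: int, seq: int, payload: bytes) -> bytes:
    """Assemble an ESP-like packet: compress, (stub-)encrypt,
    authenticate. Real hardware would run 3DES/AES here."""
    compressed = zlib.compress(payload)       # IPComp-style compression
    ciphertext = compressed                   # encryption stubbed out
    header = struct.pack("!II", spi, seq)     # SPI + sequence number
    icv = hmac.new(AUTH_KEY, header + ciphertext,
                   hashlib.sha1).digest()[:12]  # truncated HMAC-SHA-1 ICV
    return header + ciphertext + icv

def parse_packet(packet: bytes) -> bytes:
    """Reverse the pipeline: verify the ICV, 'decrypt', decompress."""
    header, body, icv = packet[:8], packet[8:-12], packet[-12:]
    expected = hmac.new(AUTH_KEY, header + body,
                        hashlib.sha1).digest()[:12]
    if not hmac.compare_digest(icv, expected):
        raise ValueError("authentication failed")
    return zlib.decompress(body)  # undo the stub-encrypt + compression

pkt = build_packet(spi=0x1001, seq=1, payload=b"host data" * 100)
assert parse_packet(pkt) == b"host data" * 100
```

Every byte of this pipeline runs on the host CPU in software; the point of the 45NS/55NS cards is that the compress/encrypt/authenticate/assemble steps move onto dedicated silicon instead.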
Enter to Win an Adafruit Pi Cobbler Breakout Kit for Raspberry Pi
It's Raspberry Pi month at Linux Journal. Each week in May, Adafruit will be giving away a Pi-related prize to a lucky, randomly drawn LJ reader. Winners will be announced weekly.
Fill out the fields below to enter to win this week's prize: a Pi Cobbler Breakout Kit for Raspberry Pi.
Congratulations to our winners so far:
- 5-8-13, Pi Starter Pack: Jack Davis
- 5-15-13, Pi Model B 512MB RAM: Patrick Dunn
- 5-21-13, Prototyping Pi Plate Kit: Philip Kirby
- Next winner announced on 5-27-13!
Free Webinar: Hadoop
How to Build an Optimal Hadoop Cluster to Store and Maintain Unlimited Amounts of Data Using Microservers
Realizing the promise of Apache® Hadoop® requires the effective deployment of compute, memory, storage and networking to achieve optimal results. With its flexibility and multitude of options, it is easy to over- or under-provision the server infrastructure, resulting in poor performance and high TCO. Join us for an in-depth technical discussion with industry experts from leading Hadoop and server companies, who will provide insights into the key considerations for designing and deploying an optimal Hadoop cluster.
Some of the key questions to be discussed are:
- What is the “typical” Hadoop cluster and what should be installed on the different machine types?
- Why should you consider the typical workload patterns when making your hardware decisions?
- Are all microservers created equal for Hadoop deployments?
- How do I plan for expansion if I require more compute, memory, storage or networking?
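A first-pass answer to the expansion question is arithmetic. The sketch below is our own illustrative sizing model, not material from the webinar; the per-node disk capacity and overhead factor are assumptions, and only HDFS's default 3x replication is a standard figure. It models the storage dimension alone (compute, memory or networking may be the real constraint):

```python
import math

def nodes_needed(raw_tb: float,
                 replication: int = 3,        # HDFS default replication factor
                 overhead: float = 1.25,      # headroom for temp/intermediate data
                 disk_per_node_tb: float = 8.0) -> int:  # assumed node capacity
    """Estimate node count from storage requirements alone."""
    total_tb = raw_tb * replication * overhead
    return math.ceil(total_tb / disk_per_node_tb)

# 100TB of raw data at 3x replication with 25% headroom
# needs 375TB of cluster storage, i.e. 47 of these nodes:
print(nodes_needed(100))  # prints 47
```

Rerunning the model with different node capacities is one quick way to compare microserver configurations before committing to hardware.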