One Box. Sixteen Trillion Bytes.

Build your own 16-Terabyte file server with hardware RAID.

I recently needed a large amount of disk space, so I decided to build a 16TB server on my own from off-the-shelf parts. This turned out to be a rewarding project, as it touched on many interesting topics, including hardware RAID, XFS, SATA and the system management issues that come with large filesystems.

Project Goals

I wanted to consolidate several Linux file servers that I use for disk-to-disk backups. These were all in the 3–4TB range and were constantly running out of space, forcing me either to shuffle which systems were backed up to which server or to reduce the number of previous backups I could keep on hand. My overall goal for this project was to create a system with a large amount of cheap, fast and reliable disk space. This system would be the destination for a number of daily disk-to-disk backups from a mix of Solaris, Linux and Windows servers. I am familiar with Linux's software RAID and LVM2 features, but I specifically wanted hardware RAID, so that the OS would be unaware of the individual disks and would see the array as a single drive. A hardware RAID controller certainly costs more than a software-based RAID setup, and this article is not about creating the cheapest possible solution for a given amount of disk space.
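Because the box is a target for daily disk-to-disk backups, it may help to sketch what such a backup job can look like on the Linux side. This is a minimal illustration, not the article's actual setup: it assumes rsync is installed, and the function name and paths are my own. It uses rsync's --link-dest option to hard-link unchanged files against the previous day's snapshot, so each dated directory costs only the space of the files that changed.

```shell
#!/bin/sh
# Illustrative disk-to-disk backup rotation (names and paths are examples).
# Each run creates a dated snapshot under the backup root; unchanged files
# are hard-linked against the previous snapshot via rsync --link-dest.
backup_daily() {
    src=$1        # source directory (absolute path)
    dest=$2       # backup root on the RAID array (absolute path)
    today=$(date +%Y-%m-%d)
    mkdir -p "$dest"
    if [ -d "$dest/latest" ]; then
        # Hard-link unchanged files against the most recent snapshot.
        rsync -a --delete --link-dest="$dest/latest" "$src/" "$dest/$today/"
    else
        # First run: plain full copy.
        rsync -a "$src/" "$dest/$today/"
    fi
    # Repoint the "latest" symlink at today's snapshot.
    rm -f "$dest/latest"
    ln -s "$dest/$today" "$dest/latest"
}
```

With a scheme like this, keeping 30 dated directories and pruning the oldest from cron gives a month of restore points at roughly the cost of one full copy plus the daily churn, which is exactly the workload that eats through a 3–4TB server.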

The hardware RAID controller would make it as simple as possible for a non-Linux administrator to replace a failed disk. The RAID controller would send an e-mail message warning about a disk failure, and the administrator typically would respond by identifying the location of the failed disk and replacing it, all with no downtime and no Linux administration skills required. The entire disk replacement experience would be limited to the Web interface of the RAID controller card.

In reality, a hot spare disk would replace any failed disk automatically, but use of the RAID Web interface still would be required to designate any newly inserted disk as the replacement hot spare. For my company, I had specific concerns about the availability of Linux administration skills that justified the expense of hardware RAID.

Hardware Choices

For me, the above requirements meant using hot-swappable 1TB SATA drives with a fast RAID controller in a system with a decent CPU, adequate memory and redundant power supplies. The chassis had to be rack-mountable and easy to service. Noise was not a factor, as this system would be in a dedicated machine room with more than one hundred other servers.

I decided to build the system around the 3ware 9650SE 16-port RAID controller, which requires a motherboard with a PCI Express slot that provides enough “lanes” (eight in this instance). Other than this, I did not care too much about the CPU choice or integrated motherboard features (beyond Gigabit Ethernet). As I had settled on 16 disks, this choice pretty much dictated a 3U or larger chassis for front-mounted hot-swap disks. It also meant there was plenty of room for a full-height PCI Express card in the chassis.

I have built the vast majority of my rackmount servers (more than a hundred) using Supermicro hardware, so I am quite comfortable with its product line. In the past, I have always used Supermicro's “bare-bones” units, which had the motherboard, power supply, fans and chassis already integrated.

For this project, I could not find a prebuilt bare-bones model with the exact feature set I required. I was looking for a system that had lots of cheap disk capacity, but did not require lots of CPU power and memory capacity—most high-end configurations seemed to assume quad-core CPUs, lots of memory and SAS disks. The Supermicro SC836TQ-R800B chassis looked like a good fit to me, as it contained 16 SATA drives in a 3U enclosure and had redundant power supplies (the B suffix indicates a black-colored front panel).

Next, I selected the X7DBE motherboard. This model would allow me to use a relatively inexpensive dual-core Xeon CPU and have eight slots available for memory. I could put in 8GB of RAM using cheap 1GB modules. I chose to use a single 1.6GHz Intel dual-core Xeon for the processor, as I didn't think I could justify the cost of multiple CPUs or top-of-the-line quad-core models for the file server role.

I double-checked the description of the Supermicro chassis to see whether a CPU heat sink was included. For the SC836TQ-R800B, the heat sink had to be ordered separately.

Figure 1. Front View of the Server Chassis




A Problem with device driver programming


I am writing a driver for a board that uses an AMCC S5935 and an EEPROM.
I use lspci -x to see the card's information, but I see the wrong values. I have tested it under MS-DOS and Windows and see the same wrong values.
I once built a WinDriver-based driver for this card, and it showed the correct values. I want to write a program on Linux, and I need help. What should I do?
I see a wrong number in Base Address Register 0, but it must be something else.
I must add 0x3c to Base Address Register 0 and read from that address, but I read something wrong, and I am confused.
I would be very grateful if you could help me.

speed test on similar system


Readers of this article might be interested in some benchmarks on a similar system, broken down by filesystem, type of application, number of disks, type of RAID, etc.

The Storage Brick - Fast, Cheap, Reliable Terabytes