One Box. Sixteen Trillion Bytes.
I recently needed a large amount of disk space, so I decided to build a 16TB server on my own from off-the-shelf parts. This turned out to be a rewarding project, as it involved many interesting topics, including hardware RAID, XFS, SATA and the system-management issues that come with large filesystems.
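Although much of this article is about hardware selection, the XFS side of such a build usually comes down to aligning the filesystem with the RAID geometry. A minimal sketch follows; the device name /dev/sdb, the 64KB stripe unit, the 14-data-disk width and the /backup mountpoint are all hypothetical values for illustration, not taken from this build:

```shell
# Align XFS to the RAID stripe: su = stripe unit, sw = number of data disks.
# /dev/sdb, 64k and 14 are assumptions; substitute your array's geometry.
mkfs.xfs -d su=64k,sw=14 -L backup /dev/sdb

# inode64 lets XFS place inodes anywhere on a multi-terabyte volume.
mount -o inode64,noatime /dev/sdb /backup
```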
I wanted to consolidate several Linux file servers that I use for disk-to-disk backups. These were all in the 3–4TB range and were constantly running out of space, requiring me either to adjust which systems were backed up to which server or to reduce the number of previous backups I could keep on hand. My overall goal for this project was to create a system with a large amount of cheap, fast and reliable disk space. This system would be the destination for a number of daily disk-to-disk backups from a mix of Solaris, Linux and Windows servers. I am familiar with Linux's software RAID and LVM2 features, but I specifically wanted hardware RAID, so that the OS would be “unaware” of the underlying disks. A hardware RAID controller certainly costs more than a software-based solution, and this article is not about creating the cheapest possible system for a given amount of disk space.
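The retention juggling described above can be scripted as a simple pruning pass that caps how many daily backup directories are kept per host. Everything in this sketch (the YYYY-MM-DD directory naming, the paths and the retention count) is a hypothetical illustration, not from the article:

```shell
# Keep only the $2 newest backup directories named YYYY-MM-DD under $1,
# removing anything older: sort newest-first, then delete from line $2+1 on.
prune_backups() {
    dest=$1
    keep=$2
    ls -1d "$dest"/????-??-?? 2>/dev/null | sort -r |
        tail -n +$((keep + 1)) |
        while read -r old; do
            rm -rf "$old"
        done
}

# Example: keep the 14 most recent daily backups of a host.
# prune_backups /backup/hostname 14
```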
The hardware RAID controller would make it as simple as possible for a non-Linux administrator to replace a failed disk. The RAID controller would send an e-mail message warning about a disk failure, and the administrator typically would respond by identifying the location of the failed disk and replacing it, all with no downtime and no Linux administration skills required. The entire disk replacement experience would be limited to the Web interface of the RAID controller card.
In reality, a hot spare disk would replace any failed disk automatically, but the RAID Web interface would still be needed to designate a newly inserted disk as the replacement hot spare. For my company, I had specific concerns about the availability of Linux administration skills that justified the expense of hardware RAID.
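For scripted monitoring alongside the Web interface, 3ware controllers also ship a tw_cli command-line tool whose `/c0 show` output includes a table of unit statuses. The parsing below is a hedged sketch against a sample of that table; the exact column layout varies by firmware version, so treat the field positions as assumptions:

```shell
# Print any RAID unit whose status column is not OK (e.g. DEGRADED,
# REBUILDING). Expects a tw_cli-style unit table on stdin; the column
# positions are assumptions and may differ on real firmware.
check_units() {
    awk '/^u[0-9]/ && $3 != "OK" { print $1, $3 }'
}

# Typical use (requires the controller and tw_cli to be installed):
# tw_cli /c0 show | check_units
```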
For me, the above requirements meant using hot-swappable 1TB SATA drives with a fast RAID controller in a system with a decent CPU, adequate memory and redundant power supplies. The chassis had to be rack-mountable and easy to service. Noise was not a factor, as this system would be in a dedicated machine room with more than one hundred other servers.
I decided to build the system around the 3ware 9650SE 16-port RAID controller, which requires a motherboard with a PCI Express slot that has enough “lanes” (eight in this instance). Beyond that, I did not care much about the CPU choice or integrated motherboard features (other than Gigabit Ethernet). Settling on 16 disks pretty much dictated a 3U or larger chassis for front-mounted hot-swap drives, which also left plenty of room for a full-height PCI card.
I have built the vast majority of my rackmount servers (more than a hundred) using Supermicro hardware, so I am quite comfortable with its product line. In the past, I have always used Supermicro's “bare-bones” units, which had the motherboard, power supply, fans and chassis already integrated.
For this project, I could not find a prebuilt bare-bones model with the exact feature set I required. I was looking for a system that had lots of cheap disk capacity, but did not require lots of CPU power and memory capacity—most high-end configurations seemed to assume quad-core CPUs, lots of memory and SAS disks. The Supermicro SC836TQ-R800B chassis looked like a good fit to me, as it contained 16 SATA drives in a 3U enclosure and had redundant power supplies (the B suffix indicates a black-colored front panel).
Next, I selected the X7DBE motherboard. This model would allow me to use a relatively inexpensive dual-core Xeon CPU and have eight slots available for memory. I could put in 8GB of RAM using cheap 1GB modules. I chose to use a single 1.6GHz Intel dual-core Xeon for the processor, as I didn't think I could justify the cost of multiple CPUs or top-of-the-line quad-core models for the file server role.
I double-checked the description of the Supermicro chassis to see whether a CPU heat sink was included. For the SC836TQ-R800B, the heat sink had to be ordered separately.