Solid-State Drives: Get One Already!
I've been building computers since the 1990s, so I've seen a lot of new technologies work their way into the mainstream. Most were the steady, incremental improvements predicted by Moore's law, but others were game-changers, innovations that really rocketed performance forward in a surprising way. I remember booting up Quake after installing my first 3-D card—what a difference! My first boot off a solid-state drive (SSD) brought back that same feeling—wow, what a difference!
However, at a recent gathering of like-minded Linux users, I learned that many of my peers hadn't actually made the move to SSDs yet. Within that group, the reluctance to try an SSD boiled down to three main concerns:
I'm worried about their reliability; I hear they wear out.
I'm not sure if they work well with Linux.
I'm not sure an SSD really would make much of a difference on my system.
Luckily, these three concerns are based on misunderstandings, outdated data or exaggeration—or are simply incorrect.
SSD Reliability Overview
How SSDs Differ from Hard Drives:
Traditional hard disk drives (HDDs) have two mechanical delays that can come into play when reading or writing files: pivoting the read/write head to the right radius and waiting for the platter to rotate until the start of the file reaches the head (Figure 1). The time it takes for the drive to get in position to read a new file is called seek time. When you hear that unique hard drive chatter, that's the actuator arm moving around to access lots of different file locations. For example, my hard drive (a pretty typical 7,200 RPM consumer drive from 2011) has an average seek time of around 9ms.
Figure 1. Hard Drive
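Those two mechanical delays are easy to estimate with a back-of-the-envelope calculation. Here's a quick sketch (using the 7,200 RPM drive and 9ms seek time mentioned above) showing why a hard drive's total access time lands in the low teens of milliseconds:

```python
# Estimate the average access time for a spinning hard drive.
RPM = 7200
avg_seek_ms = 9.0  # average seek time from the drive's spec sheet

# One full revolution takes 60/RPM seconds; on average, the drive
# waits half a revolution for the right sector to come around.
ms_per_revolution = 60.0 / RPM * 1000      # ~8.33 ms
avg_rotational_ms = ms_per_revolution / 2  # ~4.17 ms

avg_access_ms = avg_seek_ms + avg_rotational_ms
print(round(avg_access_ms, 2))  # prints 13.17 (ms)
```

So even before any data is transferred, a typical consumer drive spends roughly 13ms just getting into position—delay an SSD almost entirely avoids.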
Instead of rotating platters and read/write heads, solid-state drives store data in an array of Flash memory chips. As a result, when a new file is requested, the SSD's controller can find and start accessing the correct storage locations in sub-milliseconds. Although reading from Flash isn't terribly fast by itself, SSDs can read from several different chips in parallel to boost performance. This parallelism and the near-instantaneous seek times make solid-state drives significantly faster than hard drives in most benchmarks. My SSD (a pretty typical unit from 2012) has a seek time of 0.1ms—quite an improvement!
Reliability and Longevity:
Reliability numbers comparing HDDs and SSDs are surprisingly hard to find. Failure-rate comparisons either don't cover enough years of data or are based on old first-generation SSDs that don't represent drives currently on the market. Though SSDs reap the benefits of having no moving parts (especially beneficial for mobile devices like laptops), the conventional wisdom is that current SSD failure rates are close to those of HDDs. Even if they're a few percentage points higher or lower, both drive types have a nonzero failure rate, so you're going to need a backup solution in either case.
Apart from reliability, SSDs do have a distinct longevity issue: each NAND Flash cell can endure only a limited number of writes. How many depends on what type of cell it is. Currently, there are three types of NAND Flash cells:
SLC (Single-Level Cell) NAND: one bit per cell, ~100k writes.
MLC (Multi-Level Cell) NAND: two bits per cell, ~3k to 10k writes, slower than SLC. The range in writes depends on the physical size of the cell—smaller cells are cheaper to manufacture, but can handle fewer writes.
TLC (Triple-Level Cell) NAND: three bits per cell, ~1k writes, slower than MLC.
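Those write limits sound alarming, but a rough lifetime estimate puts them in perspective. Here's a sketch with illustrative numbers (the capacity, daily write volume and write amplification factor below are assumptions, not figures from any spec sheet): a drive's total writable data is roughly capacity times program/erase cycles, and its lifetime is that total divided by how much you write per day, inflated by the controller's write amplification:

```python
# Rough SSD lifetime estimate (illustrative numbers, not a spec).
capacity_gb = 256          # drive capacity
pe_cycles = 1000           # ~1k writes per cell (TLC-class NAND)
daily_writes_gb = 20       # host writes per day (assumed workload)
write_amplification = 2.0  # assumed controller overhead

total_writable_gb = capacity_gb * pe_cycles
days = total_writable_gb / (daily_writes_gb * write_amplification)
print(round(days / 365, 1))  # prints 17.5 (years)
```

Even with low-endurance TLC and a generous write amplification factor, wear leveling spreads writes across the whole drive, so a typical desktop workload takes many years to exhaust the cells.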
Interestingly, all three types of cells use the same transistor structure behind the scenes. Clever engineers have found a way to make that single Flash cell hold more information in MLC or TLC mode, however. At programming time, they can use a low, medium-low, medium-high or high voltage to represent four unique states (two bits) in one single cell. The downside is that as the cell is written several thousand times, the oxide insulator at the bottom of the floating gate starts to degrade, and the amount of voltage required for each state increases (Figure 2). For SLC it's not a huge deal, because the gap between states is so big, but for MLC, there are four states instead of two, so the amount of room between each state's voltage range is reduced. For TLC's three bits of information there are eight states, so the distance between each voltage range is even shorter.
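The relationship between bits per cell and voltage states is just powers of two, which is why each extra bit squeezes the margins so sharply. A quick sketch, treating the cell's usable voltage window as 100% and splitting it evenly across states:

```python
# Voltage states needed per NAND cell type, and the share of the
# voltage window each state gets (even split, for illustration).
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = 2 ** bits       # distinct voltage levels per cell
    margin = 100.0 / states  # % of the voltage window per state
    print(name, states, f"{margin:.1f}%")
# prints:
# SLC 2 50.0%
# MLC 4 25.0%
# TLC 8 12.5%
```

With only 12.5% of the window per state, TLC cells have far less headroom to absorb the voltage drift caused by oxide degradation—hence their much lower write endurance.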
Figure 2. A NAND Flash Cell