Solid-State Drives: Get One Already!

Partition alignment:

When SSDs were first released, many disk partitioning tools still placed partitions using old cylinder- and sector-based logic. This could cause a problem if a partition boundary didn't line up with the SSD's internal erase block size (512KB on many drives). Luckily, the major partitioning tools now default to a one-megabyte boundary, an even multiple of 512KB:

  • fdisk has used a one-megabyte boundary since util-linux version 2.17.1 (January 2010).

  • LVM has defaulted to a one-megabyte boundary since version 2.02.73 (August 2010).

If you're curious whether your partitions are aligned to the right boundaries, here's example output from an Intel X25-M SSD with an erase block size of 512KB:


~$ sudo sfdisk -d /dev/sda 
Warning: extended partition does not start at a cylinder boundary. 
DOS and Linux will interpret the contents differently. 
# partition table of /dev/sda 
unit: sectors 

/dev/sda1 : start=     2048, size=   497664, Id=83, bootable 
/dev/sda2 : start=   501758, size=155799554, Id= 5 
/dev/sda3 : start=        0, size=        0, Id= 0 
/dev/sda4 : start=        0, size=        0, Id= 0 
/dev/sda5 : start=   501760, size=155799552, Id=83 

Since the main data partition (sda5) starts at sector 501760 and spans 155799552 sectors, and both numbers divide evenly by 1024 (the number of 512-byte sectors in a 512KB erase block), things look good.
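
If you'd rather not do the sector math by eye, a quick check works too. Here's a sketch using the partition from the example above; parted's align-check subcommand is available in reasonably recent releases:

~$ echo $((501760 % 1024))    # a 512KB erase block is 1024 512-byte sectors
0
~$ sudo parted /dev/sda align-check opt 5    # asks parted whether partition 5 is optimally aligned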

Monitoring SSDs in Linux:

I already covered running tune2fs -l <device> as a good place to get statistics on a filesystem device, but those are reset each time you reformat the filesystem. What if you want longer-term statistics, at the drive level? smartctl is the tool for that. SMART (Self-Monitoring, Analysis and Reporting Technology) is part of the ATA standard that provides a way for drives to track and report key statistics, originally for the purpose of predicting drive failures. Because write volume is so important to SSD lifespan, most manufacturers include write statistics in the SMART output. Run sudo smartctl -a /dev/<device> on an SSD device, and you'll get a whole host of interesting statistics. If you see the message "Not in smartctl database" in the smartctl output, try building the latest version of smartmontools.

Each vendor's label for the statistic may be different, but you should be able to find fields like "Media_Wearout_Indicator" that will count down from 100 as the drive approaches the Flash wear limit and fields like "Lifetime_Writes" or "Host_Writes_32MiB" that indicate how much data has been written to the drive (Figure 3).

Figure 3. smartctl Output (Trimmed)
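
If you only care about the wear- and write-related attributes, you can filter the full report. The attribute names below follow the Intel-style labels mentioned above, so adjust the pattern for your vendor:

# Pull just the wear and write-volume attributes out of the full SMART report
~$ sudo smartctl -a /dev/sda | grep -i -E 'wearout|writes'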

Other Generic Tips

Swap: if your computer is actively using swap space, additional RAM is probably a better upgrade than an SSD. Given that SSD longevity is so tightly coupled with write volume, the last thing you want is to pump multiple gigabytes of swap on and off the drive.
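
Not sure whether swap is actually being hit? A couple of quick checks will tell you; vmstat's si and so columns show pages being swapped in and out:

~$ free -m       # how much swap is in use right now
~$ vmstat 5      # nonzero si/so columns under load mean more RAM would help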

HDDs still have a role: if you have the space, you can get the best of both worlds by keeping your hard drive around. It's a great place for storing music, movies and other media that doesn't require fast I/O. Depending on how militant you want to be about SSD writes, you can even mount directories like /tmp, /var or even just /var/log on the HDD to keep SSD writes down. Linux's flexible mounting and partitioning tools make this a breeze.
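
For example, a few /etc/fstab entries along these lines will keep log and temp churn on the spinning disk. The UUID and mountpoint are placeholders for your own HDD filesystem:

# Mount the HDD once, then bind-mount write-heavy directories from it
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/hdd   ext4  defaults,noatime  0  2
/mnt/hdd/var/log                           /var/log   none  bind              0  0
/mnt/hdd/tmp                               /tmp       none  bind              0  0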

SSD free space: SSDs run best when there's plenty of free space for them to use for wear leveling and garbage collection. Size up and manage your SSD to keep it less than 80% full.
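
A quick df run shows where you stand; the goal is to keep the SSD-backed filesystems under 80% in the Use% column:

~$ df -h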

Things that break TRIM: most RAID setups can't pass TRIM through to the underlying drives, so use RAID on SSDs with caution. In the BIOS, make sure your SATA controller is set to AHCI mode and not IDE emulation, as IDE mode doesn't support TRIM and is slower in general.
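
It's worth verifying that discards can actually reach the drive before counting on TRIM. A few quick checks (the device name is from the example above; lsblk's --discard option needs a reasonably recent util-linux):

~$ sudo hdparm -I /dev/sda | grep -i trim    # does the drive advertise TRIM support?
~$ lsblk --discard /dev/sda                  # nonzero DISC-GRAN/DISC-MAX means the kernel can issue discards
~$ dmesg | grep -i ahci                      # confirms the controller came up in AHCI mode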

SSD Performance

Now let's get to the heart of the matter—practical, real-world examples of how an SSD will make common tasks faster.

Test Setup

Prior to benchmarking, I had one SSD for my Linux OS, another SSD for when I needed to boot into Windows 7 and an HDD for storing media files and for doing low-throughput, high-volume work (like debugging JVM dumps or encoding video). I used partimage to back up the HDD, and then I used a Clonezilla bootable CD to clone my Linux SSD onto the HDD. Although most sources say you don't have to worry about fragmentation on ext4, I ran the ext4 defrag utility e4defrag on the HDD just to give it the best shot at keeping up with the SSD.
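
For reference, e4defrag ships with e2fsprogs, and its -c flag reports a fragmentation score without changing anything, which makes for a handy before-and-after check. The device name here is a placeholder for the cloned HDD partition:

~$ sudo e4defrag -c /dev/sdb1    # report fragmentation only
~$ sudo e4defrag /dev/sdb1       # actually defragment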

Here's the hardware on the development workstation I used for benchmarking—pretty standard stuff:

  • CPU: 3.3GHz Intel Core i5-2500K.

  • Motherboard: Gigabyte Z68A-D3H-B3 (Z68 chipset).

  • RAM: 8GB (2x4GB) of 1333 DDR3.

  • OS: Ubuntu 12.04 LTS (64-bit, kernel 3.5.0-39).

  • SSD: 128GB OCZ Vertex 4.

  • HDD: 1TB Samsung Spinpoint F3, 7200 RPM, 32MB cache.

I picked a set of ten tests to try to showcase some typical Linux operations. I cleared the disk cache after each test with echo 3 | sudo tee /proc/sys/vm/drop_caches and rebooted after completing a set. I ran the set five times for each drive, and plotted the mean plus a 95% confidence interval on the bar charts shown below.
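
Each pass boiled down to something like the following sketch; some_test_command stands in for whichever of the ten tests is being timed:

#!/bin/sh
# One benchmark set: drop the page cache, then time the operation, five times.
# /usr/bin/time is GNU time (from the "time" package); some_test_command is a placeholder.
for i in 1 2 3 4 5; do
    echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    /usr/bin/time -f "%e seconds" some_test_command
done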

Boot Times:

Because I'm the only user on the test workstation and use whole-disk encryption, X is set up with automatic login. Once cryptsetup prompts me for my disk password, the system goes right past the typical GDM user login to my desktop. This complicates measuring boot times, so to get the most accurate measurements, I used the bootchart package, which provides a really cool Gantt chart showing the boot time of each component (partial output shown in Figure 4). I used the Xorg process start to indicate when X starts up and the start of the Dropbox panel applet to indicate when X is usable, and I subtracted the time spent in cryptsetup (its duration depends more on how many tries it takes me to type my disk password than on how fast any of the disks are). The SSD crushes the competition here.

Figure 4. bootchart Output
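
On Ubuntu, getting the data behind a chart like Figure 4 just takes installing the collector and the renderer; the package names and output path below are the stock Ubuntu ones, so they may differ on other distributions:

~$ sudo apt-get install bootchart pybootchartgui
~$ ls /var/log/bootchart/    # a rendered chart shows up here after the next reboot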

Table 1. Boot Times

Test              HDD (s)   SSD (s)   % Faster
Xorg Start           19.4       4.9        75%
Desktop Ready        33.4       6.6        80%

Figure 5. Boot Times

Application Start Times:

To test application start times, I measured the start times for Eclipse 4.3 (J2EE version), Team Fortress 2 (TF2) and Tomcat 7.0.42. Tomcat had four WAR files at about 50MB each to unpack at start. Tomcat reports its server startup time in the logs, but I had to measure Eclipse and Team Fortress manually. I stopped timing Eclipse once the workspace was visible. For TF2, I used the time between pressing "Play" in the Steam client and when the TF2 "Play" menu appears.
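
Tomcat's startup figure comes straight out of its log, so pulling it after each run is a one-liner; the path below assumes a standalone Tomcat install with the default log location:

~$ grep "Server startup in" $CATALINA_HOME/logs/catalina.out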

Table 2. Application Launch Times

Test              HDD (s)   SSD (s)   % Faster
Eclipse              26.8      11.0        59%
Tomcat               19.6      17.7        10%
TF2                  72.2      67.1         7%

Figure 6. Application Launch Times
