Data in a Flash, Part IV: the Future of Memory Technologies

I have spent the first three parts of this series describing the evolution and current state of Flash storage. I also described how to configure an NVMe over Fabrics (NVMeoF) storage network to export NVMe volumes across RDMA over Converged Ethernet (RoCE) and, later, over native TCP. [See Petros' "Data in a Flash, Part I: the Evolution of Disk Storage and an Introduction to NVMe", "Data in a Flash, Part II: Using NVMe Drives and Creating an NVMe over Fabrics Network" and "Data in a Flash, Part III: NVMe over Fabrics Using TCP".]

But what does the future of memory technologies look like? With traditional Flash technologies enabled via NVMe, you should continue to expect higher capacities. For instance, what comes after QLC (Quad-Level Cell) NAND technology? Only time will tell. The next-generation NVMe specification will introduce a protocol standard that operates across more PCI Express lanes and at higher bandwidth. As memory technologies continue to evolve, the way you plug that technology into your computers will evolve with it.

Remember, the ultimate goal is to move closer to the CPU and reduce access times (that is, latencies).

""

Figure 1. The Data Performance Gap as You Move Further Away from the CPU

Storage Class Memory

For years, vendors have been developing a technology that lets you plug persistent memory into traditional DIMM slots. Yes, these are the very same slots that volatile DRAM also uses. Storage Class Memory (SCM) is a newer hybrid storage tier. It's not exactly memory, and it's also not exactly storage. It lives closer to the CPU and comes in two forms: 1) traditional DRAM backed by a large capacitor that preserves data to a local NAND chip (for example, NVDIMM-N) and 2) a complete NAND module (NVDIMM-F). In the first case, you retain DRAM speeds, but you don't get the capacity; a DRAM-based NVDIMM typically lags behind the latest traditional DRAM sizes. Vendors such as Viking Technology and Netlist are the main producers of DRAM-based NVDIMM products.

The second form, however, gives you the larger capacities, but it's nowhere near DRAM speeds. Here, you'll find standard NAND—the very same as found in modern Solid State Drives (SSDs)—fixed onto traditional DIMM modules.

This type of memory does not register as traditional memory to the CPU, and as of the DDR4 specification standard, modern motherboards and processors are able to use such technologies without any special microcode or firmware. When the operating system loads on a system containing such memory, it isolates that memory into a protected category (reported via the 0xe820 memory map) and won't use it like standard volatile DRAM. Instead, it accesses that memory only via a driver interface. The Linux Persistent Memory (pmem) module is that interface. Using this module, you can expose memory regions of these SCM devices as userspace-accessible block devices.
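As a minimal sketch, the following C program memory-maps such a block device and writes to it directly. It assumes a pmem namespace has already been created (for example, with ndctl) and is exposed as /dev/pmem0, and it uses a plain msync() flush rather than any pmem-specific library to keep the example self-contained:

/* pmem_map.c: map a persistent-memory block device and store to it.
 * Assumes an existing pmem namespace exposed as /dev/pmem0.
 * Build: cc -o pmem_map pmem_map.c   (run as root)
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;                      /* map a single page */
    int fd = open("/dev/pmem0", O_RDWR);
    if (fd < 0) { perror("open /dev/pmem0"); return 1; }

    char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(addr, "hello, persistent memory"); /* plain CPU stores */
    msync(addr, len, MS_SYNC);                /* flush to the persistent media */

    munmap(addr, len);
    close(fd);
    return 0;
}

Because the backing media is non-volatile, the data written here survives a reboot—unlike the same store made against ordinary DRAM.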

Current applications use SCM for in-memory databases, high-performance computing (HPC) and artificial intelligence (AI) workloads, and also as a persistent cache, although it doesn't have to be limited to those things. As NVMeoF continues to mature, it also will allow you to export SCM devices across a storage network.

Intel's Optane, Samsung's Z-SSD (and Others)

Somewhere in between DRAM and traditional SSDs are emerging technologies such as Intel's Optane (originally built in collaboration with Micron under the name 3D XPoint) and Samsung's Z-SSD. These technologies are very new, and not much is known about them beyond the fact that they're neither DRAM nor NAND. In the case of Intel's Optane, it's a new persistent memory technology believed to rely on Phase-Change Memory (PCM). Optane performs better than NAND but not nearly as well as DRAM. Another advantage is better endurance, or cell life, than NAND—that is, it's capable of more drive writes per day (DWPD) than your standard NAND SSD.

Computational Storage

Often, the latency between an application and the data it needs to access is too long, or the CPU cycles required to host that application consume too many resources on the host machine, introducing additional latencies to the drive itself. What does one do to avoid such negative impacts? One answer is to move the application onto the physical drive itself. This is a more recent emerging trend, referred to as Computational Storage.

Standing at the forefront of this technology are NGD Systems, ScaleFlux and even Samsung. So, what is Computational Storage? And how is it implemented?

The idea is to relocate data processing into the storage layer and avoid moving the data into the computer's main memory, where it would otherwise be processed by the host CPU. Think about it: on a traditional system, it takes resources to move data from where it is stored, process it and then move it back to the same storage target. The entire process takes time and introduces access latencies—even more so if the host system is tending to other related (or unrelated) tasks. And the larger the data set, the more time it takes to move in and out.

To address this pain point, a few vendors have started to integrate an embedded microprocessor into the controller of their NVMe SSDs. The processor runs a standard operating system (such as Ubuntu Linux) and allows software to run locally on the SSD for in situ computing.
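To see why this matters, consider the kind of scan a computational drive could run in place. The purely illustrative C sketch below counts records matching a keyword; on a conventional system, every byte of the data set must cross the bus into host DRAM just to produce that single integer, whereas a computational SSD could run the same loop on its embedded processor and return only the count. The file path and record format here are hypothetical:

/* scan.c: a host-side scan that counts records matching a key.
 * A computational SSD could run this same loop in situ and
 * return only the count, instead of shipping the whole data set
 * to the host. File name and record layout are illustrative only.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("/data/records.csv", "r");  /* hypothetical data set */
    if (!fp) { perror("fopen"); return 1; }

    char line[512];
    long matches = 0;
    while (fgets(line, sizeof(line), fp))
        if (strstr(line, "ERROR"))               /* the "query" */
            matches++;

    fclose(fp);
    printf("matching records: %ld\n", matches);
    return 0;
}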

Today's Challenges

What challenges prevent wider adoption of these memory technologies in today's market? The first is price per gigabyte. While a hard disk drive (HDD) costs $0.03–$0.06 per gigabyte, a NAND-based SSD is approximately $0.13–$0.15 per gigabyte. In the grand scheme of things, that may not sound like much, but at scale it makes a world of difference. Imagine trying to fill a data center with SSDs instead of HDDs. It will get expensive.
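To put rough numbers on it, a petabyte is about 1,000,000GB, so at the midpoints of those ranges, a single petabyte of raw capacity runs roughly $45,000 on HDD versus $140,000 on NAND flash—and data centers are measured in many petabytes.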

Another category where the HDD continues to outperform SSD technologies is capacity across standard form factors. You can fit only so many terabytes of storage into a standard server, and today you can fit much more of it with HDDs than with SSDs. As memory technologies evolve, this will likely change in the coming years.

Another place where SSDs struggle is in the software application realm. Many software applications do not access NAND memory in the ways that suit it best; small, unaligned, random writes, for example, both increase drive access latencies and reduce the cell life of the NAND.
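As a contrived illustration of a more flash-friendly pattern—large, aligned, sequential writes instead of many small random ones—the following sketch buffers data into aligned 1MiB chunks and writes them sequentially with O_DIRECT. The target path and sizes are arbitrary, and the file must live on a filesystem that supports O_DIRECT:

/* seqwrite.c: write data in large, aligned, sequential chunks—a
 * pattern that keeps NAND write amplification (and wear) low.
 * The path and sizes are arbitrary; this is only a sketch.
 * Build: cc -o seqwrite seqwrite.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)   /* 1MiB writes, aligned for O_DIRECT */

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 4096, CHUNK)) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0xAB, CHUNK);

    int fd = open("/mnt/nvme/testfile",      /* hypothetical target on the SSD */
                  O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < 64; i++)             /* 64MiB, written sequentially */
        if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }

    close(fd);
    free(buf);
    return 0;
}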

Summary

As it relates to memory technologies, the future looks very promising and very exciting. Will the SSD completely replace the traditional spinning HDD? I doubt it. Look at tape technology: it's still around and continues to find a place in the archival storage space. The HDD most likely will have a similar fate. Until then, though, the HDD will continue to compete with the SSD in both price and capacity.

Petros Koutoupis, LJ Editor at Large, is currently a senior performance software engineer at Cray for its Lustre High Performance File System division. He is also the creator and maintainer of the RapidDisk Project. Petros has worked in the data storage industry for well over a decade and has helped pioneer the many technologies unleashed in the wild today.
