At the Forge - Cassandra
For the past few months, I've covered a number of different non-relational (NoSQL) databases. Such databases are becoming increasingly popular, because they offer easier (and sometimes greater) speed and scalability than relational databases typically can provide. In most cases, they also are “schemaless”, meaning you don't need to predefine or declare the names, types and sizes of the data you are storing. This also means you can store persistent information with the ease and flexibility of a hash table.
I'm still skeptical that these non-relational databases should always be used in place of their relational counterparts. Relational databases have many years of thought, development and debugging behind them, and they are designed for reliability and for arbitrary combinations of data. NoSQL databases, by contrast, are designed for speed and scalability, without “joins” and the other features that are a central pillar of relational queries.
Thus, I've come to believe that relational databases still have an important role to play in the computer world, and even in the world of high-powered Web applications. However, just as the introduction of built-in strings, arrays, hash tables and other sophisticated data structures has made life easier for countless programmers, I feel that non-relational databases have an important role to play, offering developers a new mix of interesting and useful ways to store and retrieve data.
To date, I have explored several non-relational systems in this column. CouchDB and MongoDB are both “document” databases, meaning they basically allow you to store collections of name-value pairs (hashes, if you like) and then retrieve elements from those collections using various types of queries. CouchDB and MongoDB are quite different in how they store and retrieve data, and they also approach replication differently.
Both CouchDB and MongoDB are closer in style and spirit to one another than to the system I covered last month, Redis—a key-value store that's extremely fast, but that limits you to querying on a particular key and supports only a limited set of data types. Plus, Redis assumes you have a single server. Although you can replicate to a secondary server, there is no partitioning of the data or the load among more than one node.
Cassandra is a little like all of these, and yet it's quite different from any of them. Cassandra stores data in what can be considered a multilevel (or multidimensional) hash table. You can retrieve information according to the keys, making it like a key-value store, like Redis or Memcached. But, Cassandra allows you to ask for a range of keys, giving it a bit of extra flexibility. Moreover, the multidimensional nature of Cassandra, its use of “super columns” to store multiple items of a similar type and its storage of name-value pairs at the bottom level provide a fair amount of flexibility.
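To make the multidimensional model concrete, here is a minimal sketch in plain Python—not the Cassandra API—that models the hierarchy as nested hash tables: column family → row key → super column → column name → value. All of the names here (“Posts”, “metadata” and so on) are hypothetical examples, not anything Cassandra defines:

```python
# Conceptual model of a Cassandra "super column family" as nested dicts.
# Hierarchy: column family -> row key -> super column -> column -> value.
posts = {                          # column family (hypothetical: "Posts")
    "post-1": {                    # row key
        "metadata": {              # super column
            "title": "Hello, Cassandra",
            "author": "reuven",
        },
        "body": {                  # another super column
            "text": "First post...",
        },
    },
}

# Key-value lookup, much like Redis or Memcached:
title = posts["post-1"]["metadata"]["title"]

# Unlike a plain hash table, Cassandra can also return rows whose keys
# fall within a range; a dict needs an explicit scan to simulate that:
def key_range(column_family, start, end):
    """Return rows whose keys sort between start and end, inclusive."""
    return {k: v for k, v in sorted(column_family.items())
            if start <= k <= end}

matching = key_range(posts, "post-0", "post-2")
```

The nesting is what the terms “column family” and “super column” refer to; the actual Cassandra storage engine is far more sophisticated, but the lookup semantics are roughly these.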
Cassandra really shines when it comes to many aspects of scalability. You can add nodes, and Cassandra integrates them into the storage system seamlessly. Nodes can die or be removed, and the system handles that appropriately. Data is replicated across multiple nodes according to a configurable replication factor, so the cluster should continue to run even when some of its nodes fail. And, because writes are distributed across the different nodes, it takes very little time to write new data to Cassandra.
It's clear that Cassandra has resonated with a large number of developers. The project started at Facebook, in order to solve the problem of searching through users' inboxes. Facebook donated the code to the Apache Project, which has since promoted it and made it a first-class project. Facebook no longer participates in the open-source version of Cassandra, but apparently Facebook still uses it on its systems. Meanwhile, companies including Rackspace, Twitter and Digg all have become active and prominent Cassandra users, contributing code and adding to the general sense of momentum that surrounds Cassandra.
Perhaps the two biggest hurdles I've had to overcome in working with Cassandra are the unusual terminology and the configuration and administration that are necessary. The terminology is difficult in part because it uses existing terms (“column” and “row”, for example) in ways that differ from what I'm used to with relational databases. It's not hard, but it does take some getting used to. (Although the developers might have done everyone a favor by avoiding such terms as “column families” and “super columns”.) The configuration aspects aren't terribly onerous, but perhaps they point to how spoiled people have gotten when working with non-relational databases. The fact that I have to name my keyspaces and column families in a configuration file, and then restart Cassandra so that their definitions will take effect, seems like a throwback to older, more rigid systems. However, relational databases force us to define our tables, columns and data types before we can use them, and that never seemed like a terrible burden. And, it seems that part of the secret of Cassandra's speed and reliability is the fact that its data structures are rigidly defined.
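As a rough illustration of what such a definition looks like, here is a sketch of a keyspace declared in Cassandra's storage-conf.xml configuration file. The keyspace and column-family names (“Blogs”, “Posts”) are hypothetical, and the exact set of attributes depends on your Cassandra version, so treat this as a shape rather than a drop-in snippet:

```xml
<!-- Hypothetical keyspace definition in storage-conf.xml.
     Cassandra must be restarted for changes here to take effect. -->
<Keyspaces>
  <Keyspace Name="Blogs">
    <!-- A super column family, with UTF-8 ordering at both levels -->
    <ColumnFamily Name="Posts"
                  ColumnType="Super"
                  CompareWith="UTF8Type"
                  CompareSubcolumnsWith="UTF8Type"/>
    <ReplicationFactor>1</ReplicationFactor>
  </Keyspace>
</Keyspaces>
```

The `CompareWith` attributes determine how column names are sorted on disk, which in turn determines what range queries you can perform—one example of the rigidity mentioned above paying off in predictable performance.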
This month, I take an initial look at getting Cassandra up and running and explain how to store and retrieve data inside a simple Cassandra instance.