At the Forge - Memcached

Want to make sure your application will scale? Consider memcached, which allows you to speed up response times while reducing the load on your database server.
Memcached

As I mentioned previously, you can think of memcached as a network-accessible hash table. Like a hash table, it has keys and values, with a single value stored per key. Also like a hash table, it doesn't give you many ways to store and retrieve your data: you can set a key-value pair, retrieve the value associated with a key, and delete a key.
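
In Ruby terms, the whole interface boils down to the same three operations you would perform on a plain hash (an ordinary Ruby hash here, just to illustrate the operations memcached offers):

h = { }                  # our stand-in for the cache
h['foo'] = 'bar'         # set a key-value pair
value = h['foo']         # retrieve a value by its key
h.delete('foo')          # delete a key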

This might seem like a limited set of functions. And it is, if you think of memcached as your primary data store. But that's exactly the point. Memcached was never designed to be a general-purpose database or to serve as the primary persistent storage mechanism for your application. Rather, it was meant to cache information that you already had retrieved from a relational database and that you probably were going to need to retrieve again in the near future.

In other words, memcached allows you to make your application more scalable, letting you take advantage of the fact that data is fetched repeatedly from the database, often by multiple users. By first querying memcached and accessing the database only when necessary, you reduce the load on your database and increase the effective speed of your Web application.
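
To make this concrete, here is a minimal sketch of that query-the-cache-first pattern, written with the memcache-client Ruby gem that I describe below. The get_user_from_database method is a hypothetical stand-in for whatever database query your application already performs:

#!/usr/bin/ruby

require 'rubygems'
require 'memcache'

CACHE = MemCache.new 'localhost:11211'

# Hypothetical stand-in for your application's real database query
def get_user_from_database(user_id)
  # ... SELECT * FROM Users WHERE id = user_id ...
end

def get_user(user_id)
  user = CACHE.get("user-#{user_id}")         # ask memcached first
  if user.nil?                                # cache miss?
    user = get_user_from_database(user_id)    # fall back to the database
    CACHE.set("user-#{user_id}", user)        # and cache it for next time
  end
  user
end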

The main cost to you is the time involved in integrating memcached into your application, the RAM that you allocate to memcached and the server(s) that you dedicate to memcached. How many servers you will want to allocate to memcached depends, of course, on the size and scale of your Web site. You might need only one memcached server when you start out, but you might well need to expand to ten, 100 or even several hundred memcached servers (as I've heard Facebook uses) to maximize application speed and efficiency.

Using Memcached

On my Ubuntu system, I was able to install memcached with:

apt-get install memcached

Then, I started memcached with:

/usr/bin/memcached -vv -u reuven

The -vv option turns on very verbose logging, allowing me to see precisely what is happening from the server's perspective. The -u flag lets me set the user under which memcached will run; it cannot be run as root, for security reasons.
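
In production, you probably will want a few additional flags as well. For example, something like the following (the exact options can vary between memcached versions, so check memcached -h on your system) runs memcached as a daemon, limits it to 64MB of RAM and listens on the default port on the loopback interface:

/usr/bin/memcached -d -u reuven -m 64 -p 11211 -l 127.0.0.1

The -d flag daemonizes the server, -m sets the maximum amount of memory (in megabytes) that memcached will use for cached data, -p sets the TCP port and -l restricts the addresses on which it listens.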

Now, let's write a short client program to store and retrieve values. I'm going to write the client program in Ruby, although you can use almost any language you like (including Perl, Python or PHP). I used the memcache-client Ruby gem to connect to the memcached server; I installed the gem by typing:

sudo gem install memcache-client

Here is a short program that connects to the memcached server, stores a value and then retrieves a value:

#!/usr/bin/ruby

# Load necessary libraries
require 'rubygems'
require 'memcache'

# Create the memcached client
CACHE = MemCache.new 'localhost:11211'

# Set a value
CACHE.set('foo', 'bar')

# Retrieve a value
value = CACHE.get('foo')
puts "Value = '#{value}'"

As you can see, the first thing we do is create a client object that connects to the memcached server. You can specify one or more servers; in this case, we indicate that there is only one, running on localhost, on port 11211. It might surprise you to learn that although memcached is described as a distributed caching mechanism, the various memcached servers never speak to one another. Rather, it is the client that decides on which server a particular piece of data will be stored, and it uses the same hashing algorithm to determine which server to query when retrieving that data.
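
For example, if you also were running memcached on two other (hypothetical) machines, you could hand the client all three addresses. If I recall the memcache-client API correctly, it accepts an array of servers and hashes each key to exactly one of them:

# Hypothetical additional servers; each key is hashed to one of them
CACHE = MemCache.new ['192.168.1.10:11211',
                      '192.168.1.11:11211',
                      'localhost:11211']

CACHE.set('foo', 'bar')    # stored on whichever server 'foo' hashes to
value = CACHE.get('foo')   # the same hashing sends this get to that server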

So, in the original single-server program, we connect to our server, set a value (much as we would in a hash table) and then retrieve it. It's nothing very exciting, although the fact that the memcached server might be running on an entirely different computer already makes things interesting.

Here is a slight variation on the previous program. Notice the third argument to CACHE.set, as well as the invocation of sleep afterward:

#!/usr/bin/ruby

require 'rubygems'
require 'memcache'

CACHE = MemCache.new 'localhost:11211'

# Set a value, telling memcached to expire it after three seconds
CACHE.set('foo', 'bar', 3)

# Sleep past the three-second expiration time
sleep 5

value = CACHE.get('foo')
puts "Value = '#{value}'"

This time, the output looks like this:

Value = ''

Huh? What happened to our value? Didn't we set it? Yes, we did, but we told memcached to expire the value after three seconds. This is one important way in which memcached is designed for easy integration into a Web application: you can specify how long memcached should continue to treat a piece of data as valid. If you pass no expiration time, memcached holds onto the value indefinitely. Giving your data an expiration time helps ensure that whatever you retrieve from the cache is still reasonably fresh.

Just how long you should keep data in the cache is a question only you can answer, and it probably depends on the type of object you're storing. Orders from your on-line store probably should expire after a short period, because they likely will change as users visit your site. But, information about users is unlikely to change once they have registered, so it might make sense to hold onto that for a longer period of time.
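
Whatever you decide, the decision boils down to passing a different third argument to set for each type of object. The key names, variables and expiration times in the following sketch are purely illustrative:

ONE_MINUTE = 60
ONE_DAY    = 60 * 60 * 24

# Orders change frequently, so cache them only briefly
# (order_id and order are assumed to exist already)
CACHE.set("order-#{order_id}", order, ONE_MINUTE)

# Registered users rarely change, so cache them for much longer
CACHE.set("user-#{user_id}", user, ONE_DAY)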

It might seem strange for me to be describing memcached as a repository for complex objects, such as orders or people. And yet, memcached is fully able to handle such objects, assuming they are marshaled and unmarshaled by the client software. Thus, we can have the following short program:

#!/usr/bin/ruby

require 'rubygems'
require 'memcache'

CACHE = MemCache.new 'localhost:11211'

# Store an array containing objects of several different classes
CACHE.set('foo', [:a, :b, 'c', [1,2,3],
        {:blah => 5, :blahblah => 10}, Time.now])

# Retrieve the array and print the class of each element
value = CACHE.get('foo')
puts "Value = '#{value.map{ |i| i.class}.join(', ')}'"

Sure enough, we see that memcached is happy both to set and retrieve values of a variety of classes. This means that even if we create a complex class, we can store it in memcached and retrieve it later.
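
For example, here is a short sketch using a made-up Person class. As long as the same class definition is available to the program that retrieves the object, it survives the round trip through memcached intact:

#!/usr/bin/ruby

require 'rubygems'
require 'memcache'

# A made-up class, purely for illustration
class Person
  attr_reader :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name = last_name
  end
end

CACHE = MemCache.new 'localhost:11211'

# Store an instance of our class...
CACHE.set('person', Person.new('Alice', 'Smith'))

# ...and retrieve it again
person = CACHE.get('person')
puts "#{person.class}: #{person.first_name} #{person.last_name}"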
