FS-Cache and FUSE for Media Playback QoS

Use FS-Cache to remove fluctuating performance issues from media playback.

The FS-Cache Project works with network filesystems like NFS to maintain a local on-disk cache of network files. The project is split into a kernel module (fscache) and a dæmon (cachefilesd), which together maintain the disk cache. The local on-disk cache is kept under a directory on a local filesystem, such as /var/fscache on an ext3 /var filesystem. The filesystem containing the fscache directory must support Extended Attributes (EAs). Such filesystems are quite common and include ext3 and xfs.

Early Fedora Core 6 kernel RPMs contained the fscache kernel module. Unfortunately, around version 2.6.18-1.2868.fc6 of the updated kernels, the module was no longer included, and Fedora 7 kernels do not include it either. Hopefully, this module will be available again in standard Fedora kernels in the future. The Fedora Core 6 update kernel 2.6.20-1.2948.fc6 includes an FS-Cache patch, but the kernel module itself is not built.

Patches are available for the Linux kernel for the FS-Cache kernel module (see Resources).

The cachefilesd dæmon communicates with the kernel module using either a file in /proc (/proc/fs/cachefiles) or a device file (/dev/cachefiles). Version 0.7 and earlier versions of cachefilesd could communicate only via the proc file; version 0.8 also can use the device file if it is available, falling back to the proc file otherwise.
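The interface probe described above can be sketched as a short shell test; this is an illustration of the fallback logic, not code from cachefilesd itself:

```shell
# Determine which cachefiles interface the running kernel exposes (sketch).
if [ -c /dev/cachefiles ]; then
    iface=/dev/cachefiles          # preferred by cachefilesd 0.8 and later
elif [ -e /proc/fs/cachefiles ]; then
    iface=/proc/fs/cachefiles      # only option for 0.7 and earlier
else
    iface=none                     # kernel lacks cachefiles support entirely
fi
echo "cachefiles interface: $iface"
```

On a kernel without the fscache module loaded, this reports `none`, which is also why cachefilesd fails to start on the Fedora kernels mentioned above.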

Setting Up cachefilesd

For Fedora Core 6 and Fedora 7, there is a cachefilesd RPM. Installation without package management should also be fairly easy, as the dæmon consists mainly of a single executable and a configuration file (/etc/cachefilesd.conf).

The two main things to set up in the configuration file are the directory under which to store the filesystem cache and the options controlling how much space the cache may consume on the filesystem containing that directory. You also can supply a tag for the cache if you want to have multiple local disk caches operating at the same time.

The space constraints all have acceptable defaults, so the cache directory is the only configuration option you need to pay attention to. Make sure that this directory is acceptable for storing caches and that it exists prior to trying to start cachefilesd. For a media PC, using a directory on a Flash memory card or on a RAM disk is a good option.

Because the cache directory must support extended attributes, and your tmpfs might not, you may have to create an ext3 filesystem in a single file inside the tmpfs filesystem and then use that embedded ext3 filesystem for the cachefilesd path. The ext3 filesystem inside the single file will happily support extended attributes, and because the whole filesystem lives in a single file on a RAM disk, it will not cause distracting disk IO on the media PC.

The fstab entry in Listing 1 sets up both a 64MB RAM filesystem and the mountpoint for the embedded ext3 filesystem. The commands shown in Listing 2 set up the embedded ext3 filesystem. As the cache.ext3fs filesystem exists only in RAM, you have to add these commands to /etc/rc.local or another suitable boot-time script to re-create the cache directory after a reboot. This script must run before cachefilesd is started. A good solution is to leave cachefilesd out of your standard init run-level startups and start it manually from rc.local just after setting up the cache.ext3fs embedded filesystem.
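The listings are not reproduced here; a setup along these lines would accomplish the same thing. The mountpoints /var/ramfs and /var/fscache, the image size, and the file name cache.ext3fs are illustrative assumptions, not the article's actual listings:

```shell
# /etc/fstab entry (sketch): a 64MB tmpfs to hold the ext3 image file
#   none  /var/ramfs  tmpfs  size=64m  0 0
#
# Boot-time commands, e.g. from /etc/rc.local, to build and mount the
# embedded ext3 filesystem and then start the cache dæmon:
dd if=/dev/zero of=/var/ramfs/cache.ext3fs bs=1M count=60  # backing file in tmpfs
mkfs.ext3 -F /var/ramfs/cache.ext3fs                       # ext3 supports EAs
mount -o loop,user_xattr /var/ramfs/cache.ext3fs /var/fscache
cachefilesd                                                # start only after the cache dir exists
```

Note that the image (60MB here) must fit inside the 64MB tmpfs, and the loop mount must happen before cachefilesd starts, matching the ordering constraint described above.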

If the cache directory is on a persistent filesystem, such as /var, set cachefilesd to start automatically, as shown in Listing 3.
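On SysV-init Fedora systems of that era, enabling the dæmon at boot would look roughly like the following (a sketch of standard service commands, not the article's Listing 3):

```shell
# Enable cachefilesd in the standard run levels and start it now
chkconfig cachefilesd on
service cachefilesd start
```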

The space constraints in the configuration file set limits, as percentages of available blocks and files, on how much of the filesystem containing the local cache directory may be used. For each of these two resource types, there are three thresholds: cull-off, cull-start and cache-off. As long as free space stays above the cull-off limit, no culling of the disk cache is performed; once free space drops to the cull-start limit, culling begins. For example, for the disk block constraint, setting cull-off at 20% and cull-start at 10% means that as long as the disk has more than 20% free blocks, nothing from the cache will be culled. Once the disk reaches 10% free blocks, cache culling begins to free up some space. If free space drops all the way to the cache-off limit (say, 5%), the cache is disabled until free space rises back above that limit.

The configuration options are prefixed with b for block type constraint and f for the files-available constraint. The configuration file has a slightly different naming method from that used above. For block constraints, the cull-off limit is called brun. For cull-start, the limit is called bcull, and cache-off is called bstop.
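Putting the pieces together, a /etc/cachefilesd.conf might look like the sketch below. The block thresholds reuse the example percentages from above; the file thresholds and the tag name are illustrative assumptions:

```shell
# /etc/cachefilesd.conf (sketch)
dir /var/fscache        # cache directory; must be on an EA-capable filesystem
tag mycache             # optional tag, useful when running multiple caches
brun  20%               # cull-off: above 20% free blocks, nothing is culled
bcull 10%               # cull-start: at 10% free blocks, culling begins
bstop  5%               # cache-off: at 5% free blocks, the cache is disabled
frun  10%               # the same three thresholds for files available
fcull  7%
fstop  3%
```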
