FS-Cache and FUSE for Media Playback QoS
To turn on FS-Cache for a mountpoint, you must pass it the fsc mount option. I noticed that I had to enable FS-Cache for all mountpoints from a given NFS server, or FS-Cache would not maintain its cache. This should not be much of an issue for a machine used as a media PC, because you likely will not mind having all NFS mounts from the file server cached.
The fstab entry shown in Listing 4 includes the fsc option. Adding this fsc option to all mountpoint references to fileserver:/... will enable FS-Cache.
Listing 4. fstab Entry for Mounting an NFS Directory on the Fileserver with FS-Cache
fileserver:/foo /foo nfs bg,intr,soft,fsc 0 0
At this stage, FS-Cache will store a local cache copy of files, or parts thereof, that are read from the file server. What we really want is for data from the files we are viewing on the media PC to be read ahead into the local disk cache.
To get information into the local disk cache, we can use a FUSE module as a shim between the NFS mountpoint and the application viewing the media. With FUSE, you can write a filesystem as an application in the user address space and access it through the Linux kernel just like any other filesystem. To keep things simple, I refer to the application that provides a FUSE filesystem simply as a FUSE module.
The FUSE filesystem should take the path to the NFS filesystem we want to cache (the delegate) and a mountpoint where the FUSE filesystem is exposed by the kernel. For example, if we have a /HomeMovies NFS mount where we store all our digital home movies, the FUSE module might be mounted on /CacheHomeMovies and will take the path /HomeMovies as the delegate path.
When /CacheHomeMovies is read, the FUSE module will read the delegate (/HomeMovies) and show the same directory contents. When the file /CacheHomeMovies/venice-2001.dv is read, the FUSE module reads the information from /HomeMovies/venice-2001.dv and returns that. Effectively, /CacheHomeMovies will appear just the same as /HomeMovies to an application.
At this stage, we have not gained anything over using /HomeMovies directly. However, in the read(2) implementation of the FUSE module, we could just as easily ask the delegate (/HomeMovies) to read in what the application requested and the next 4MB of data. The FUSE module could just throw away that extra information. The mere act of the FUSE module reading the 4MB of data will trigger FS-Cache to read it over the network and store it in the local disk cache.
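As a rough sketch of that over-reading read path, here is the same idea outside FUSE, using plain pread(2) against a delegate file descriptor. The function name and the 4MB constant are illustrative, not taken from Listing 5:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <vector>

// How far past the application's request we pull data through the
// kernel (and thus through FS-Cache). 4MB, as described in the text.
static const size_t READAHEAD_BYTES = 4 * 1024 * 1024;

// Serve the application's read from the delegate file, then read the
// next READAHEAD_BYTES and throw them away. The discarded read is what
// makes FS-Cache fetch that region over the network and cache it.
ssize_t shim_read(int delegate_fd, char *buf, size_t size, off_t offset)
{
    ssize_t got = pread(delegate_fd, buf, size, offset);
    if (got > 0) {
        std::vector<char> scratch(READAHEAD_BYTES);
        // Result intentionally ignored: we only want the side effect
        // of the data landing in the local disk cache.
        (void)pread(delegate_fd, scratch.data(), scratch.size(),
                    offset + got);
    }
    return got;
}
```

A real FUSE module would do the equivalent inside its read handler, with the delegate file living on the fsc-mounted NFS share so the discarded read populates FS-Cache.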
The main advantage of using FUSE is that caching continues to work properly when the user seeks during video playback. The main disadvantage is the extra local copying, where the FUSE module asks for more information than is returned to the video player. This can be mitigated by having the FUSE module request the extra information only every now and then: for example, reading ahead only when 2MB of data has been consumed by the application.
For optimal performance, the read-ahead should happen in a separate thread of control in the FUSE module, or use readahead(2) or asynchronous IO, so that the video playback application is not blocked waiting for a large read-ahead request to complete.
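A minimal sketch of the separate-thread approach, using std::async rather than POSIX AIO (the helper name is hypothetical, not from the article's listing):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <fcntl.h>
#include <future>
#include <unistd.h>
#include <vector>

// Kick off a read-ahead without blocking the caller. The bytes are
// thrown away after they populate the local cache; the future is
// returned only so the work can be waited on or discarded.
std::future<ssize_t> readahead_async(int fd, off_t offset, size_t len)
{
    return std::async(std::launch::async, [fd, offset, len]() -> ssize_t {
        std::vector<char> scratch(len);
        return pread(fd, scratch.data(), len, offset);
    });
}
```

In the FUSE module, fs_read() would launch this and return the application's data immediately, never calling get() on the playback path.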
The fuselagefs package is a C++ wrapper for FUSE. It includes the Delegatefs superclass, which provides support for implementing FUSE modules that take a delegate filesystem and add some additional functionality. Delegatefs is a perfect starting point for writing simple shim filesystems like the nfs-readahead FUSE module described above.
The read-ahead algorithm is designed to read 8MB using asynchronous IO, and when the first 4MB of that has been shown to the application using the FUSE filesystem, it reads another 8MB using asynchronous IO. So, at worst, there should always be 4MB of cached data available to the FUSE module.
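The window arithmetic can be sketched as a pair of pure helper functions. The names and constants mirror the description above, not the exact code in Listing 5:

```cpp
#include <cassert>
#include <sys/types.h>

// Sizes from the text: each asynchronous read pulls 8MB, and a new
// one is launched once the application has consumed 4MB of it.
static const off_t AIO_BUFFER_SZ      = 8 * 1024 * 1024;
static const off_t AIO_CONSUME_WINDOW = 4 * 1024 * 1024;

// Launch a read-ahead when the application's read offset has reached
// the point at which the next one is due.
bool should_launch_readahead(off_t appOffset, off_t startNextAIOOffset)
{
    return appOffset >= startNextAIOOffset;
}

// After launching a read-ahead at appOffset, the next launch becomes
// due once another consume-window's worth of data has been read.
off_t next_trigger_offset(off_t appOffset)
{
    return appOffset + AIO_CONSUME_WINDOW;
}
```

Because each launch fetches AIO_BUFFER_SZ (8MB) but the trigger advances by only AIO_CONSUME_WINDOW (4MB), the windows overlap and at least 4MB of already-fetched data stays ahead of the application.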
The C++ class to implement the shim is about 70 lines of code, as shown in Listing 5. Two offsets are declared to keep track of the file offset in the previous call to fs_read() and the offset at which we should launch another asynchronous read-ahead call. The aio_buffer_sz is declared constant as an enum so that it can be used to declare the size of aio_buffer. When aio_consume_window bytes of the aio_buffer have been shown to the application using the FUSE filesystem, another read-ahead is performed. If debug_readahread_aio is true, the FUSE module explicitly waits for the asynchronous read-ahead to finish before returning. This is handy when debugging to ensure that the return value of the asynchronous IO is valid. A non-illustrative example would have a callback report whether an asynchronous IO operation has failed.
The main job of schedule_readahread_aio() is to execute, when one is due, a single asynchronous read-ahead call. It updates m_startNextAIOOffset to tell itself when the next asynchronous read-ahead call should be made. The forceNewReadAHead parameter allows the caller to force a new asynchronous read-ahead, for cases such as when a seek has been performed.
The fs_read() method is a virtual method from Delegatefs. It has similar semantics to the pread(2) system call: data should be read into a buffer of a given size at a nominated offset. The fs_read() method is called indirectly by FUSE. The main logic of our fs_read() is to check whether the given offset is in logical sequence from the last read call. If the offset is not sequential from the last byte returned by the previous read(), fs_read() will force schedule_readahread_aio() to perform another read-ahead. schedule_readahread_aio() is always called from fs_read() so that it can maintain the sliding asynchronous read-ahead window.
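The sequentiality check reduces to comparing the incoming offset with where the previous read ended. A sketch, with an illustrative helper name rather than the method from Listing 5:

```cpp
#include <cassert>
#include <sys/types.h>

// A read is "sequential" when it starts exactly where the previous
// one finished; anything else is treated as a seek, which forces a
// fresh asynchronous read-ahead window.
bool is_sequential(off_t offset, off_t lastOffset, ssize_t lastReturned)
{
    return offset == lastOffset + lastReturned;
}
```

fs_read() would pass the result as the forceNewReadAHead argument (inverted) when calling schedule_readahread_aio().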
As Delegatefs knows how to read bytes from the delegate filesystem, we can then simply return the result of calling up to the base class. The remainder of nfs-fuse-readahead-shim.cpp is taken up by parsing command-line options, and instead of returning from main(), it calls the main method of a Delegatefs through an instance of the CustomFilesystem class. The shim is compiled with the Makefile shown in Listing 6.