PVFS: A Parallel Virtual File System for Linux Clusters
Because certain types of applications perform better on certain file systems due to their access patterns, it is important to us that PVFS be able to coexist with other file systems. The PVFS system had no problem operating in the same environment with JFS, NFS, SFS and even the MOSIX file system. This setup served large I/O requests, such as MP3 files, on the Web. The MOSIX file system is what allows MOSIX to migrate processes on the cluster to the most appropriate CPU at the time.
Typically, PVFS sits on top of the ext2 file system. However, the next generation of Linux file systems will be journaling file systems. These protect against hardware or software failures by keeping a log of changes-in-progress, which records changes to files and directories in a standard way in a separate location, possibly on a separate disk. If the primary file system crashes, the journal can be used to replay completed operations and roll back any partially completed ones that would otherwise leave the file system in an inconsistent state.
The next step is to see how well PVFS performs on top of ext3 and GFS as the underlying native file systems. This is left for experimentation on the new cluster (see below).
Another important factor in choosing a file system such as PVFS is to check how well it can scale up with more client and I/O nodes. Having one nonredundant management node might seem like an inherent bottleneck. However, the manager is not involved in any read or write operations, as these are handled directly between clients and I/O dæmons. It is only when many files are created, removed, opened or closed in rapid succession that the manager is heavily loaded.
We wish to test the scalability of this configuration, however, so the upcoming PVFS installation will be on a cluster consisting of 16 PIII 500MHz CPUs with 512MB RAM each. Eight of the CPUs have 18GB SCSI disks each, in a mix of RAID 1 and RAID 5 configurations. The projected installation will have one manager, seven I/O nodes and 14 clients (I/O nodes are also clients). This cluster will allow us to better understand how PVFS will scale for our applications and will additionally allow us to compare PVFS performance with the performance of alternative file systems, such as NFS, for systems of this size. Tests of PVFS on other clusters have shown it to be scalable to systems of more than 64 nodes for other workloads. (See “PVFS: A Parallel File System for Linux Clusters” at PVFS's web site in Resources.)
PVFS is easy to install and configure. It comes with an installation guide that walks administrators through the installation procedure. It provides high performance for I/O-intensive parallel or distributed applications. It is also compatible with existing applications, so you don't have to modify them to run with PVFS. PVFS is well supported by the developers through mailing lists.
PVFS currently provides neither data redundancy nor recovery from node failure. There are also potential bottlenecks at the manager level as the number of client nodes increases. PVFS is subject to restrictions imposed by its dependence on TCP/IP, such as limits on the number of simultaneously open sockets and the network traffic overhead inherent in the TCP/IP protocol. As for security, PVFS provides a rather unsophisticated security model, which is intended for protected cluster networks. Also, for the time being, PVFS is limited to the traditional Linux two-gigabyte file size.
Types of applications that benefit the most from PVFS are:
Applications requiring large I/O, such as scientific computation or streaming media processing
Parallel applications, because aggregate bandwidth increases as multiple clients access data simultaneously
Types of applications for which PVFS is poorly suited:
Applications requiring many small, low-latency requests, such as serving static HTML pages (there is quite a bit of network traffic overhead in making multiple small file requests)
Applications requiring long-term storage or failover ability—PVFS does not provide redundancy on its own
As noted earlier, existing applications can access PVFS through either the kernel module or the library wrapper interface. This does not require any modification from the user's point of view. However, to obtain the best performance for parallel applications, developers must modify their applications to use a more sophisticated interface. There are two options for this approach as well. The first is to use the native PVFS library calls. This interface allows advanced options, such as specifying file striping and number of I/O nodes. It also lets each process define a “partition” or particular view of the file so that contiguous read operations access only specific offsets within the file (see Figures 2 and 3). Documentation for this is available in the PVFS user's guide.
MPI-IO is the preferred option for writing PVFS programs. It layers additional functionality on top of PVFS, including collective file operations and two-phase I/O. This interface is documented as part of the MPI-2 standard.
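A minimal collective-write sketch gives the flavor of this interface. It assumes an MPI-2 implementation (such as MPICH with ROMIO) built with PVFS support, and the file path `/pvfs/testfile` is a hypothetical location on a mounted PVFS volume:

```c
/* Sketch of collective MPI-IO, assuming an MPI-2 implementation with
 * PVFS support; compile with mpicc and run under mpirun. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK 1024 /* bytes written by each process */

int main(int argc, char *argv[])
{
    int rank;
    MPI_File fh;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    buf = malloc(BLOCK);
    for (int i = 0; i < BLOCK; i++)
        buf[i] = (char)rank;

    /* All processes open the shared file together. */
    MPI_File_open(MPI_COMM_WORLD, "/pvfs/testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank's view of the file begins at its own offset,
     * much like a PVFS partition. */
    MPI_File_set_view(fh, (MPI_Offset)rank * BLOCK, MPI_BYTE, MPI_BYTE,
                      "native", MPI_INFO_NULL);

    /* Collective write: all ranks participate, which lets the
     * library coordinate the requests using two-phase I/O. */
    MPI_File_write_all(fh, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

The collective call is the important part: because every process declares its intent at once, the MPI-IO layer can merge the many small requests into a few large, well-aligned ones before they reach the I/O nodes.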