OpenFiler: an Open-Source Network Storage Appliance
I've set up quite a few file servers using Linux in my day, and although it's not particularly difficult, I've often thought that there should be a better way to do it. The folks at the OpenFiler Project definitely have built a better mousetrap. The OpenFiler team seems to be inspired by the NetApp filer family of Network Storage Appliances and has come out with an open-source clone that lets you take any x86 computer and give it nearly all the functionality of a NetApp filer.
The OpenFiler distribution is an easy-to-install, easy-to-use, nearly turnkey solution. At the time of this writing, the current version is 2.3, and it's based on rPath, so it's focused and lean where it needs to be, allowing the developers to pack it with features useful to its main purpose. It's even lean enough to run on some embedded systems. The feature list is comprehensive, and it compares very well with commercial appliances like those offered by Snap and others. Here are some of OpenFiler's killer features:
Full iSCSI target and initiator support.
Support for Fibre Channel devices (depending on hardware).
Support for software (md) RAID or hardware RAID.
On-line volume/filesystem expansion.
Synchronous/asynchronous replication of data.
File serving via NFS, SMB/CIFS, HTTP/WebDAV and FTP.
Supports SMB/CIFS shadow copy for snapshot volumes.
Supports NIS, LDAP and Windows NT/Active Directory authentication.
Flexible quota management.
Easy-to-use Web-based admin GUI.
The only real downside to OpenFiler is that you have to pay for the Administration Guide. The Installation Guide and a downrev version of the Admin Guide are both on-line and available for free, but the current revision of the Admin Guide is available only for paying customers, as this is how the OpenFiler Project is funded. Luckily, OpenFiler is easy to configure, thanks to its GUI, so that isn't a huge detriment.
If you are familiar with installing a Red Hat-based Linux distribution, installing OpenFiler will be old hat to you. The system requirements are fairly low; I've installed OpenFiler on an embedded PC with a 500MHz CPU, 512MB of RAM and a 2GB CompactFlash card, but it'll install on regular desktops and servers as well. Booting off the CD lands you in a graphical installer (unless you use the text argument when booting the system). Note that you must select manual partitioning when setting up the operating system disk in your machine; otherwise, you won't be able to set up data storage disks in the OpenFiler Admin GUI later. Aside from that, it's a fairly standard Red Hat-ish installation. Once the installation is complete, the next step is to configure your OpenFiler instance by pointing a Web browser to https://IP_OF_OPENFILER:446.
You now should have the OpenFiler management GUI open in your Web browser, as shown in Figure 1. As per the Installation Guide, log in with user name “openfiler” and password “password”. After you log in, you'll be in the admin interface, at the main status screen. From here, you can configure just about every aspect of your OpenFiler.
The status screen can show you vital system information at a glance. It's especially handy that the admin interface displays the uptime and load average of the machine in the title bar of the console. Not shown in the screenshot are the memory and storage graphs, similar to a graphical top.
The system screen is where you can set up and adjust the overall system parameters, like the IP address of the machine or its high-availability/replication partner. It even embeds a Java-based SSH client in the console, so you can get a shell on the machine if you need to, although any SSH client works as well. Note: it's important to define the hosts or networks that your OpenFiler will serve here. If you don't do that, your OpenFiler will refuse to serve files via NFS or SMB/CIFS. It's not difficult to add—I simply dropped a statement to cover my 192.168.1.0/24 in there—but OpenFiler stubbornly refused to talk to any machines until that was added. Another thing to note here is that OpenFiler supports the creation of bonded Ethernet interfaces, so if you're building a mission-critical file server, you can put two network cards in the server, connect each card to a different network switch, and then you have fault tolerance at the network level.
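For context, the host-access list you define on the system screen ends up governing much the same thing a hand-written NFS exports entry would. A minimal sketch using the 192.168.1.0/24 network from above (the path is an assumption; OpenFiler manages the actual file itself, and its generated entries may differ):

```shell
# /etc/exports -- illustrative only; OpenFiler writes this file itself.
# Grant the 192.168.1.0/24 network read-write access to a share.
/mnt/vg0/data  192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
```

Without an entry like this for your client network, the NFS and SMB/CIFS services simply refuse connections, which matches the behavior described above.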
The volume manager is where you can add disks to your OpenFiler, create filesystems and manage software RAIDs. OpenFiler uses the Linux Logical Volume Manager (LVM) as its volume manager, and it supports both ext3 and XFS filesystems for storage that's locally attached to the OpenFiler host. In this case, because I'm using an embedded PC, I had to attach a 320GB disk via USB to OpenFiler. It wasn't a problem—OpenFiler happily allowed me to create a volume group using that USB disk, and then I could create a volume within that group and start laying out the filesystem.
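Behind the GUI, OpenFiler is driving the standard Linux LVM toolchain. The rough command-line equivalents, assuming the USB disk shows up as /dev/sdb with a single partition (a sketch of the general technique, not literally what OpenFiler runs):

```shell
# Mark the partition as an LVM physical volume
pvcreate /dev/sdb1
# Create a volume group named vg0 on it
vgcreate vg0 /dev/sdb1
# Carve out a 100GB logical volume for data
lvcreate -L 100G -n data vg0
# Lay down an XFS filesystem on the new volume
mkfs.xfs /dev/vg0/data
```

Knowing this mapping is handy if you ever need to inspect or repair a volume from the embedded SSH shell.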
The next tab in the admin interface is the quota tab. The quota screen lets you set quotas per group, user or guest, and have a different quota for each volume. For example, if your OpenFiler was in a business environment, you could set everyone in the Marketing group to have a 2GB quota each, everyone in the Engineering group could have a 10GB quota, and everyone in the IT group could be uncapped—except for the CEO, who's also uncapped. Having flexible quota options allows you to tailor the OpenFiler to the needs of your business.
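On the command line, the same per-group limits would look something like this with the standard Linux quota tools (a sketch; the group names, mount path and 2GB/10GB figures just follow the example above, and block limits are in 1K units):

```shell
# 2GB soft/hard block limit for the Marketing group on the data volume
setquota -g marketing 2097152 2097152 0 0 /mnt/vg0/data
# 10GB for Engineering
setquota -g engineering 10485760 10485760 0 0 /mnt/vg0/data
# 0 means unlimited -- the IT group (and the CEO) stay uncapped
setquota -g it 0 0 0 0 /mnt/vg0/data
```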
The share manager is where you make subdirectories within a volume, and then share out those subdirectories. This is where you'll spend a lot of time, setting up the directories, shares and access permissions. A nice feature of OpenFiler is that you can specify which network service shares out a specific directory. For example, I can set up a Sales share that is SMB/CIFS only (all the Sales folks run Windows), an Engineering share that is NFS only (all the Engineers run Linux) and a Sandbox share that is serviced by both SMB/CIFS and NFS. I then can use the same screen to lock down the permissions on the respective shares, so that only the members of those groups can read or write to those shares, while the Sandbox is wide open.
I discovered an interesting bit of trivia while researching this article. If you want to share directories via NFS to an Apple Mac, so the directory can be mounted in the Finder, you must allow client connections originating from ports above 1024 (this is otherwise known as the insecure NFS export option). The Mac's NFS client connects from a non-privileged source port by default, so a server that insists on privileged ports will refuse the mount. (And yes, I have a Mac. I view it as a flashier but less knowledgeable cousin to my Ubuntu machines.)
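In export-file terms, that setting looks like the following, after which a plain mount from the Mac succeeds (a sketch reusing the example share and network from earlier; hostnames and paths are assumptions):

```shell
# Server side (/etc/exports): 'insecure' accepts client connections
# from source ports above 1024, which the Mac's NFS client uses.
/mnt/vg0/data  192.168.1.0/255.255.255.0(rw,sync,insecure)

# Mac side: mount the share into a local directory
sudo mount -t nfs openfiler:/mnt/vg0/data /Volumes/data
```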
The next tab over is the services manager, where you can enable or disable the network services provided by OpenFiler. If you plan on using your OpenFiler only as an NFS server, you can turn off the SMB/CIFS services completely and save some memory on your server. This screen also is where you can specify options, such as which workgroup the SMB/CIFS server belongs to, or whether there is a UPS attached to the OpenFiler, so it can auto-shutdown in the event of a power failure. OpenFiler also can act as an LDAP server, and you can back up or restore LDAP directories via this screen.
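The SMB/CIFS options set here correspond to familiar Samba smb.conf settings underneath, along these lines (illustrative only; the workgroup name is an assumption, and OpenFiler generates its own configuration):

```shell
# Fragment of an smb.conf [global] section
[global]
    # Workgroup or NT domain the server appears in
    workgroup = MYGROUP
    server string = OpenFiler storage
```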
The last tab in the admin console is the accounts manager, which is where you define what authentication methods you'd like OpenFiler to use. You can run an internal LDAP server on the OpenFiler itself, and create the users and groups locally. You also can point the OpenFiler to your corporate LDAP if you have one. If you're in a Windows environment, you can set up OpenFiler to use your corporate Active Directory for authentication or even an old-school NT4-style domain.
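Whichever back end you pick, it's worth verifying it from another machine with a quick query. A hedged example against OpenFiler's internal LDAP server (the base DN dc=example,dc=com is an assumption; substitute whatever you configured):

```shell
# Anonymous simple-bind search for POSIX accounts:
# -x = simple authentication, -H = server URI, -b = search base
ldapsearch -x -H ldap://IP_OF_OPENFILER -b "dc=example,dc=com" \
    "(objectClass=posixAccount)" uid
```

If the users and groups you created in the GUI come back in the results, NFS and SMB/CIFS authentication should line up as well.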
Bill Childers is the Virtual Editor for Linux Journal. No one really knows what that means.