Building an Ultra-Low-Power File Server with the Trim-Slice
NFS is the classic Network File System, and it has been in use for decades on Linux and UNIX. The Popcorn Hour media player connected to my TV supports NFS, and I don't have any Windows computers, so NFS is really the only classic file-serving protocol I need (or want) on my network. NFS has very limited security, so it's not ideal for everyone, but it's lightweight and easy to configure. In my opinion, if you have a device that supports NFS and SMB, go with NFS.
On Ubuntu, the NFS server I use is called nfs-kernel-server, and you can install it with the following:
sudo apt-get install nfs-kernel-server
To create an NFS share, edit the /etc/exports file, and add the directory you want to export. Here is an example:
/mnt/disk01 popcorn(ro,sync,root_squash,no_subtree_check)
The above line exports the /mnt/disk01 directory to my Popcorn Hour, with the following flags:
ro — read-only: in other words, don't allow anything that could change the filesystem. The Popcorn Hour has the ability to delete items, but I don't want to let my kids delete things arbitrarily or accidentally with the remote.
sync — reply to requests only after the changes have been committed to stable storage.
root_squash — map requests from uid/gid 0 to the anonymous uid/gid. This makes things a little more secure.
no_subtree_check — from the man page: "This option disables subtree checking, which has mild security implications, but can improve reliability in some circumstances." See the exports man page for more information.
With the line in place, I run the exportfs -ra command to refresh the exports. Then on the Popcorn Hour, I can mount the exported directory, and away I go. There are several other options you can use in the /etc/exports file. See the exports man page for details.
The example entry above can't be mounted on any other host. To permit other hosts to do so, I either can change popcorn to the IP address and netmask of the network I want to share it with (for example, 192.168.10.0/24 for every host with an IP address starting with 192.168.10.), or I can add additional host definitions to the end of the line.
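Those two broader forms would look something like the following in /etc/exports. These are alternatives, not lines to use together, and "laptop" is a hypothetical second host:

```
# share with the whole 192.168.10.0/24 network:
/mnt/disk01 192.168.10.0/24(ro,sync,root_squash,no_subtree_check)

# or list several hosts, each with its own options, on one line:
/mnt/disk01 popcorn(ro,sync,root_squash,no_subtree_check) laptop(ro,sync,root_squash,no_subtree_check)
```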
With the exports file updated and refreshed, I can mount the export with something like this:
sudo mount -t nfs trimslice:/mnt/trimslice/disk01 /mnt/trimslice/disk01
Or, I could add an entry like the following to my /etc/fstab file:
trimslice:/mnt/trimslice/disk01 /mnt/trimslice/disk01 nfs defaults 0 0
and the NFS share always would be mounted at boot time.
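If you'd rather not mount the share at every boot — say, the server isn't always powered on — a variant like this works too (the noauto,user combination is one reasonable choice, not the only one):

```
trimslice:/mnt/trimslice/disk01 /mnt/trimslice/disk01 nfs noauto,user 0 0
```

With noauto, the share is skipped at boot, and user lets an ordinary user run mount /mnt/trimslice/disk01 by hand when it's needed.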
Samba, aka SMB/CIFS, is how you go about sharing files with computers running Windows. If I had a Windows machine or two, using Samba would be a given. I don't, but I'll go ahead and describe the process here. For starters, Samba is installed on the Trim-Slice with the following:
sudo apt-get install samba
After installation, edit the /etc/samba/smb.conf file to set up your shares (add them to the end of the file). A read-only share equivalent to the NFS one described above is:
[disk01]
   comment = trimslice disk01
   path = /mnt/disk01
   browsable = yes
   guest ok = yes
   read only = yes
Add the above to the end of the smb.conf file, and the share will pop into existence on the network. Samba re-reads its configuration file periodically, so new client connections pick up the change without any command being run; already-connected clients keep the old settings until you reload or restart the service.
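For comparison, a writable guest share looks much the same with read only flipped; the share name and path here are hypothetical:

```
[public]
   comment = writable guest share
   path = /mnt/disk01/public
   browsable = yes
   guest ok = yes
   read only = no
```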
It's a good idea to uncomment the security = user line in the smb.conf file to add some security (and if you do want security, you should set guest ok in the above example to no). And, if you have a proper Windows network, you should change the workgroup name in the smb.conf file to the actual name of your Windows workgroup.
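Put together, the security-related pieces look roughly like this; note that with security = user, each person also needs a Samba password entry on the server before connecting ("jsmith" is an example user):

```
# /etc/samba/smb.conf, [global] section:
   security = user

# then give each user a Samba password on the server:
#   sudo smbpasswd -a jsmith
```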
As with NFS, you can enter a lot more settings in the smb.conf file to tweak things just the way you want them. The default file is filled with examples, and the Samba documentation goes into even greater detail.
DAAP, in case you are interested, stands for Digital Audio Access Protocol. An older, but serviceable, standalone DAAP server for Linux is mt-daapd, also known as the Firefly Media Server. Unfortunately, it is not under active development. Some forks are in the works (which aren't in the Ubuntu repositories yet), so maybe the situation will improve in the future. To install it, do the following:
sudo apt-get install mt-daapd
After installing mt-daapd, set the password for the admin account in the /etc/mt-daapd.conf file. Technically, the password already is set, but it's good practice to change it. You can tweak other settings in the file, but the GUI is easier.
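The settings the GUI manages live in that same file. The lines most worth a look are roughly these (the values shown are examples, not defaults):

```
admin_pw = changeme
mp3_dir = /mnt/disk01/music
servername = trimslice
port = 3689
```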
Figure 3. mt-daapd, aka the Firefly Media Server
After changing the password, restart mt-daapd with:
sudo /etc/init.d/mt-daapd restart
Then, go to the Web interface to configure it: http://trimslice:3689 (replace "trimslice" in the URL with the correct IP address or name).
The configuration page is simple and self-explanatory. You can set the name, change the admin password and set a password for listening to the music (in case you don't want to share your collection of classic Dr. Who music with everyone on your network). You also set which folder or folders contain your music (multiple folders can be specified). Finally, you can configure how often to have mt-daapd rescan your music folder(s).
Once the changes are to your liking, pressing the Save button saves the settings to the /etc/mt-daapd.conf file. You could edit that file by hand instead, but the GUI is there, so you might as well use it.
All should be well and good at this point. Unfortunately, mt-daapd, as packaged in the repository the Trim-Slice uses, does not support FLAC files. If your collection is mostly MP3 files, that won't be an issue. If it is an issue, your options are to compile your own, live with the limitation or find an alternative.
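To gauge how much the missing FLAC support matters for your library, a quick count by file extension does the trick (the music path below is an example):

```shell
#!/bin/sh
# Count audio files with a given extension under a directory,
# case-insensitively (so .FLAC and .flac both match).
count_ext() {    # usage: count_ext DIR EXT
  find "$1" -iname "*.$2" 2>/dev/null | wc -l
}

count_ext /mnt/disk01/music flac   # how many files need FLAC support
count_ext /mnt/disk01/music mp3    # how many play fine as-is
```

If the FLAC count comes back at or near zero, the packaged mt-daapd will serve you fine as-is.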