Raspberry Pi: the Perfect Home Server
The first order of business is to get the external storage device mounted. Use dmesg to look for where the storage device landed; it almost certainly will be /dev/sda. I like using the automounter to handle mounting removable storage devices, as it is more flexible about handling devices that may not be present or ready at boot time:
>> sudo apt-get install autofs
>> sudo nano -w /etc/auto.master

======/etc/auto.master======
...
/misc /etc/auto.misc
...
======/etc/auto.master======

>> sudo nano -w /etc/auto.misc
Note, my external storage device is formatted with ext4—modify this for your needs if required:
======/etc/auto.misc======
...
storage -fstype=ext4 :/dev/sda1
...
======/etc/auto.misc======

>> sudo /etc/init.d/autofs restart
>> ls -lat /misc/storage
Optionally, create a symlink to shorten the path a smidgen:
>> sudo ln -s /misc/storage /storage
At the top of any home server feature list is providing rock-solid backups. With the RPi, this is pretty simple, due to the wide range of network-sharing options in Linux: Samba/CIFS for Windows machines, NFS for UNIX-based devices and even SFTP for more advanced backup clients like deja-dup. Because the RPi has only 100Mb Ethernet, and the storage device is on USB, it's not going to have super-fast transfer speeds. On the other hand, good backup clients run automatically and in the background, so it's unlikely that you'll notice the slightly slower transfer speeds.
My home network includes one Windows 7 machine. For it, I exported a backup directory on the RPi's external USB storage device via Samba. Because the backup utility in the basic version of Windows 7 doesn't support network drives as a backup destination, I used SyncBack Free to set up automated, daily backups.
Configuring Samba is simple.
1) Install the samba and samba-common-bin packages (the latter provides the smbpasswd utility):
>> sudo apt-get install samba samba-common-bin
2) Use smbpasswd to give your local ID access:
>> sudo smbpasswd -a YOURUSERIDHERE
3) Edit the samba configuration file:
>> sudo nano -w /etc/samba/smb.conf
4) Change the workgroup = WORKGROUP line to match your Windows workgroup name.
5) Comment out or delete the [homes] and [printers] shares. (Printer sharing will be done later via direct CUPS access.)
6) Add an entry for the Windows backup paths. Here's my example, which I placed at the bottom of the file:
======/etc/samba/smb.conf======
...
[win7pc]
comment=Backup for windows PC
path=/storage/win7pc
writeable=Yes
create mask=0777
directory mask=0777
browsable=Yes
public=Yes
valid users=YOURUSERIDHERE
...
======/etc/samba/smb.conf======
7) Restart Samba to implement your edits:
>> sudo /etc/init.d/samba restart
8) Test connectivity from the Windows machine by mapping a network drive from the file explorer.
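Connectivity can also be checked from any Linux box on the LAN with smbclient (the share name, user ID and IP address here are the ones from this example; adjust them for your setup):

```
>> smbclient //192.168.1.10/win7pc -U YOURUSERIDHERE
```

A successful connection drops you at an smb: \> prompt, where ls should show the (empty) backup directory.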
For Linux devices, deja-dup is brilliantly simple to set up and use. It's been installed by default on both my Fedora 18 and Ubuntu 12.10 installs. While the package name is "deja-dup", the front end is simply called "Backup". Although the RPi easily could support NFS export, I've found that using deja-dup's SSH option is easier and more portable, and it eliminates the need for an additional service on the RPi. Specifying a deja-dup encryption password is probably a good idea, unless you like the idea of all your files walking off if someone pockets the storage drive:
>> sudo mkdir /storage/linuxlaptop
>> sudo chown -R YOURUSERIDHERE:YOURUSERIDHERE /storage/linuxlaptop
From the client Linux machine, launch the backup utility, choose "SSH" as the backup location, and enter the RPi's IP address and the storage location you just created. The first backup will be slow, but future runs will be sending only incremental changes, which is significantly faster.
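Under the hood, deja-dup drives duplicity, so the same backup can be run by hand if you prefer scripting it. A rough sketch of the equivalent command (the source directory is hypothetical; the user ID, IP address and destination path are the ones from this example, and duplicity will prompt for an encryption passphrase unless told otherwise):

```
>> duplicity ~/Documents sftp://YOURUSERIDHERE@192.168.1.10//storage/linuxlaptop
```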
Figure 2. Deja-dup Client Setup
Multimedia Server: DLNA
Now that everyone's files are backed up safely, let's move on to some fun! A DLNA server will give you a central place to store your movies, music and pictures. From this central repository, DLNA clients from every screen in the house can play back this content with ease.
At least, that's the promise. The reality is that the DLNA specs don't quite nail down many important things, like which formats or encodings are supported. Each client typically has a slightly different idea of what formats and server features it would like to support. A much higher-power server might be able to transcode local content to device-supported formats on the fly, but that's not possible on the RPi, and the on-the-fly transcoding often messes up other features like pause, fast-forward and rewind. In general, higher-powered devices like the PS3, Xbox and WD TV devices can handle most formats without any transcoding. Lower-end devices like smart TVs or Blu-ray players support a much more limited list of codecs.
For the RPi, your best bet is simply to encode to the standards your primary DLNA device supports and then test your other DLNA clients. If they won't play nicely, the tips in the next section may help. In my case, my PlayStation 3 acts as the DLNA client, which plays nicely with the compact .m4v files generated by Handbrake.
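If you prefer encoding from the command line, HandBrake's companion tool HandBrakeCLI can produce the same compact .m4v files. A minimal sketch (the input filename is made up, and preset names vary between HandBrake versions, so check HandBrakeCLI --preset-list on your install):

```
>> HandBrakeCLI -i input.mkv -o MovieName.m4v --preset "Universal"
```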
Minidlna is a great choice for the RPi DLNA server. It's already in the Raspbian distribution, is quite simple to set up and uses minimal server resources while running:
>> sudo apt-get install minidlna
>> sudo nano -w /etc/minidlna.conf
Here are the relevant sections of my /etc/minidlna.conf:
...
# I found keeping video + audio in different paths helpful
media_dir=V,/storage/dlna/video
media_dir=A,/storage/dlna/music
...
presentation_url=http://192.168.1.10:8200/
...
friendly_name=MyRPi
...
# Since I add new media infrequently, turning off
# inotify keeps minidlna from polling for
# content changes. It's simple enough to run
# sudo /etc/init.d/minidlna force-reload
# when new content is added.
inotify=no
Once done editing, tell minidlna to restart and rescan for content:
>> sudo /etc/init.d/minidlna force-reload
Minidlna has the ability to provide movie-poster thumbnails for your movies for devices that support it (like the PS3). It makes finding a specific movie when scrolling through dozens of movie files much more convenient. I've found that the most compatible file layout is to have one directory per movie, containing just the movie file plus the thumbnail image named "Cover.jpg". Using a format like "MovieName.m4v" and "MovieName.jpg" works fine for the PS3, but it breaks VLC (if you can convince the VLC uPNP plugin to find the server in the first place).
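As a concrete sketch, here is the one-directory-per-movie layout described above, built under /tmp purely for illustration (the real tree would live under /storage/dlna/video, and the movie name is made up):

```shell
# Build the per-movie layout minidlna serves most compatibly:
# one directory per movie, holding the movie file plus "Cover.jpg".
base=/tmp/dlna-demo/video
mkdir -p "$base/MovieName"
touch "$base/MovieName/MovieName.m4v"   # the movie itself
touch "$base/MovieName/Cover.jpg"       # poster thumbnail shown by the PS3
find "$base" -mindepth 1 | sort         # show the resulting tree
```

After copying real content into place, remember to run the force-reload above so minidlna rescans.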
From the PS3, you can test connectivity by going to "Video" on the XMB bar. The "friendly_name" you set previously should be visible when scrolling down in the Video section. If you can't find it, test to ensure that minidlna is up by going to http://192.168.1.10:8200/ with a Web browser.
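The same status check can be scripted from any machine with curl (assuming curl is installed; the address matches the presentation_url set earlier):

```
>> curl -s http://192.168.1.10:8200/ | head
```

If minidlna is healthy, this returns the HTML of its status page, which also reports how many files it has indexed.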
Multimedia for Non-DLNA Devices
Once you get DLNA working with some of your devices, you may find devices it doesn't want to work with, so a multimedia plan B is a good idea. The nginx Web server has an MP4 plugin that tries to improve streaming over plain-old HTTP, but browser playback performance varied widely, and fast-forwarding within a movie didn't work consistently either. It seems like the lowest common denominator for multimedia sharing across fussy or non-DLNA devices is a good-old-fashioned Samba share with guest read-only access.
Here's a sample section from /etc/samba/smb.conf:
[dlna]
path=/storage/dlna
read only=yes
browsable=yes
public=yes
After defining the share and restarting Samba (sudo /etc/init.d/samba restart), you can start to test out your clients.
I tested the following clients with a mix of videos encoded with Handbrake as m4v files:
Android 4.0.4 phone: "ES File Explorer" with "ES Media Player" (player comes with install).
Android 4.1.2 tablet: "ES File Explorer" with "ES Media Player" (player comes with install).
Linux devices: mount //192.168.1.10/dlna, then use VLC or MPlayer.
Windows: map \\192.168.1.10\dlna, then use VLC.
All devices were able to start playing almost instantly and fast-forward with no delays.
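On a Linux client, the mount step above can be done by hand with a plain CIFS mount (the mount point is arbitrary, and the cifs-utils package must be installed):

```
>> sudo mkdir -p /mnt/dlna
>> sudo mount -t cifs //192.168.1.10/dlna /mnt/dlna -o guest,ro
>> ls /mnt/dlna
```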
Print Server: CUPS
The RPi runs CUPS quite well, so it's easy to share an older printer that doesn't have native networking features.
Install CUPS and any packages needed by your printer. I needed hplip-cups since I have an HP inkjet printer:
>> sudo apt-get install cups hplip-cups
Update the "Listen" line and add the Allow @LOCAL block to the Location directives as shown below (so you can use other machines on your LAN to administer CUPS):
======/etc/cups/cupsd.conf======
#Listen localhost:631 #Comment this out
Listen 192.168.1.10:631 #Add this line
...
<Location />
Order allow,deny
Allow @LOCAL
</Location>

# Restrict access to the admin pages...
<Location /admin>
Order allow,deny
Allow @LOCAL
</Location>

# Restrict access to configuration files...
<Location /admin/conf>
AuthType Default
Require user @SYSTEM
Order allow,deny
Allow @LOCAL
</Location>
======/etc/cups/cupsd.conf======
Add your local ID to the lpadmin group so you can administer CUPS:
>> sudo usermod -a -G lpadmin YOURUSERIDHERE
Then restart CUPS to pick up the new configuration:
>> sudo /etc/init.d/cups restart
Then, go to http://192.168.1.10:631/ and click "Adding Printers and Classes" to set up your printer. My printer was auto-discovered on the USB, so all I had to do was click "share". Also access https://192.168.1.10:631/admin, and make sure to check "Share printers connected to this system".
Once you're done, you can set up your clients the usual way. My Linux clients auto-discovered the printer and picked the right printer drivers once I entered the hostname. On my Windows 7 machine, once I selected "Network Printer", I had to click "The printer that I want isn't listed", select "Select a shared printer by name" and then enter the URL from the CUPS Web interface: http://192.168.1.10:631/printers/HP_J4500.
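Command-line clients can skip driver setup entirely and talk to the CUPS server directly; a quick smoke test from any Linux box (the printer name is the one from the URL above, so substitute your own):

```
>> lpstat -h 192.168.1.10:631 -a
>> echo "CUPS test" | lp -h 192.168.1.10:631 -d HP_J4500
```

The first command lists which queues are accepting jobs; the second sends a tiny text job to the shared printer.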
With a minimal amount of additional hardware and configuration, the Raspberry Pi can be a highly capable, compact home server. It can bring the wide range of enterprise services offered by Linux into a home environment with minimal hardware expense.