Two Pi R
After the brick directory is available on each Raspberry Pi and the
glusterfs-server package has been installed, make sure both Raspberry
Pis are powered on. Then, log in to whatever node you consider the master,
and use the
gluster peer probe command to tell the master to trust the IP
or hostname that you pass it as a member of the cluster. In this case, I
will use the IP of my secondary node, but if you are fancy and have DNS
set up, you also could use its hostname instead:
pi@pi1 ~ $ sudo gluster peer probe 192.168.0.122
Probe successful
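Before going further, it's worth confirming that the nodes now see each other. This is a sketch of my own, not from the article: running gluster peer status on either Pi should list the other node as a connected peer. The check is guarded so it's harmless on a machine without the gluster command installed:

```shell
# Verify the peer relationship from either node. "gluster peer status"
# should report the other Pi as "Peer in Cluster (Connected)".
if command -v gluster >/dev/null 2>&1; then
    status=$(sudo gluster peer status)
else
    # Guard for machines without gluster installed
    status="gluster not installed; run this on one of the Pis"
fi
echo "$status"
```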
Now that my pi1 server (192.168.0.121) trusts pi2 (192.168.0.122),
I can create my first volume, which I will call gv0. To do this, I run
the gluster volume create command from the master node:
pi@pi1 ~ $ sudo gluster volume create gv0 replica 2 \
    192.168.0.121:/srv/gv0 192.168.0.122:/srv/gv0
Creation of volume gv0 has been successful. Please start
the volume to access data.
Let's break this command down a bit. The first part,
create, tells the gluster command I'm going to create a new volume. The
gv0 is the name I want to assign the volume. That name is
what clients will use to refer to the volume later on. After that, the
replica 2 argument configures this volume to use replication instead of
just distributing data between bricks (the default). In this case, it will make sure any data is
replicated across two bricks. Finally, I define the two individual bricks
I want to use for this volume: the /srv/gv0 directory on 192.168.0.121
and the /srv/gv0 directory on 192.168.0.122.
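For contrast, here is what leaving out the replica 2 argument would mean. This is a hypothetical sketch of my own, not something to run against Pis you care about: without replica 2, GlusterFS creates a distribute volume, which spreads whole files across the bricks with no redundancy, so losing one node loses whatever files landed on its brick.

```shell
# Hypothetical command only -- this is what a *distribute* (no
# redundancy) volume named gv1 would look like:
#
#   sudo gluster volume create gv1 \
#       192.168.0.121:/srv/gv1 192.168.0.122:/srv/gv1
#
# The practical difference in one line:
summary="replica mirrors every file across bricks; distribute does not"
echo "$summary"
```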
Now that the volume has been created, I just need to start it:
pi@pi1 ~ $ sudo gluster volume start gv0
Starting volume gv0 has been successful
Once the volume has been started, I can use the
info command on
either node to see its status:
$ sudo gluster volume info

Volume Name: gv0
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.0.121:/srv/gv0
Brick2: 192.168.0.122:/srv/gv0
Configure the GlusterFS Client
Now that the volume is started, I can mount it as a GlusterFS-type filesystem from any client that has GlusterFS support. First, though, I will mount it from my two Raspberry Pis themselves, because I want them to be able to write to the volume. To do this, I create a new mountpoint on each Raspberry Pi and use the mount command to mount the volume on it:
$ sudo mkdir -p /mnt/gluster1
$ sudo mount -t glusterfs 192.168.0.121:/gv0 /mnt/gluster1
$ df
Filesystem         1K-blocks    Used Available Use% Mounted on
rootfs               1804128 1496464    216016  88% /
/dev/root            1804128 1496464    216016  88% /
devtmpfs               86184       0     86184   0% /dev
tmpfs                  18888     216     18672   2% /run
tmpfs                   5120       0      5120   0% /run/lock
tmpfs                  37760       0     37760   0% /run/shm
/dev/mmcblk0p1         57288   18960     38328  34% /boot
192.168.0.121:/gv0   1804032 1496448    215936  88% /mnt/gluster1
The more pedantic readers among you may be saying to yourselves, "Wait a minute, if I am specifying a specific IP address here, what happens when 192.168.0.121 goes down?" It turns out that this IP address is used only to pull down the complete list of bricks used in the volume, and from that point on, the redundant list of bricks is what will be used when accessing the volume.
Once you mount the filesystem, play around with creating files and then looking into /srv/gv0. You should be able to see (but again, don't touch) files that you've created from /mnt/gluster1 on the /srv/gv0 bricks on both nodes in your cluster:
pi@pi1 ~ $ sudo touch /mnt/gluster1/test1
pi@pi1 ~ $ ls /mnt/gluster1/test1
/mnt/gluster1/test1
pi@pi1 ~ $ ls /srv/gv0
test1
pi@pi2 ~ $ ls /srv/gv0
test1
After you are satisfied that you can mount the volume, make it permanent by adding an entry like the following to the /etc/fstab file on your Raspberry Pis:
192.168.0.121:/gv0 /mnt/gluster1 glusterfs defaults,_netdev 0 0
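One caveat with pinning a single IP in /etc/fstab: as with the manual mount, that address is used only to fetch the volume layout, but if 192.168.0.121 happens to be down when a client boots, the initial mount itself will fail. The GlusterFS 3.x FUSE client supports a backupvolfile-server mount option for exactly this case; the option name here is an assumption based on the 3.x client, so check man mount.glusterfs on your version:

```
192.168.0.121:/gv0 /mnt/gluster1 glusterfs defaults,_netdev,backupvolfile-server=192.168.0.122 0 0
```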
Note that if you also want to access this GlusterFS volume from other clients on your network, just install the GlusterFS client package for your distribution (for Debian-based distributions, it's called glusterfs-client), and then create a mountpoint and perform the same mount command as I listed above.
Now that I have a redundant filesystem in place, let's test it. Since I want to make sure that I could take down either of the two nodes and still have access to the files, I configured a separate client to mount this GlusterFS volume. Then I created a simple script called glustertest inside the volume:
#!/bin/bash

while [ 1 ]
do
    date > /mnt/gluster1/test1
    cat /mnt/gluster1/test1
    sleep 1
done
This script runs in an infinite loop and just copies the current date into a file inside the GlusterFS volume and then cats it back to the screen. Once I make the file executable and run it, I should see a new date pop up about every second:
# chmod a+x /mnt/gluster1/glustertest
root@moses:~# /mnt/gluster1/glustertest
Sat Mar 9 13:19:02 PST 2013
Sat Mar 9 13:19:04 PST 2013
Sat Mar 9 13:19:05 PST 2013
Sat Mar 9 13:19:06 PST 2013
Sat Mar 9 13:19:07 PST 2013
Sat Mar 9 13:19:08 PST 2013
I noticed that every now and then the output would skip a second, but in this case, I think it was just a function of the date command not executing exactly one second apart each time: the extra fraction of a second each loop takes beyond the sleep eventually adds up to a skipped second.
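To convince yourself that the skipped seconds are loop overhead rather than GlusterFS, you could time each pass. Here is a finite sketch of that idea, my own variant rather than the article's script; it writes to /tmp so it runs anywhere, and you can point target at /mnt/gluster1/test1 to exercise the real volume:

```shell
#!/bin/bash

# Finite variant of the glustertest loop that times each iteration.
# Each pass lasts "sleep 1" plus the fraction of a second that date,
# the write and the cat take, so the total always exceeds 1000 ms --
# those extra fractions are what add up to a skipped second.
target=/tmp/glustertest.out   # swap in /mnt/gluster1/test1 on a client

for i in 1 2 3; do
    start=$(date +%s%N)
    date > "$target"
    cat "$target"
    sleep 1
    end=$(date +%s%N)
    elapsed_ms=$(( (end - start) / 1000000 ))
    echo "iteration $i took ${elapsed_ms} ms"
done
```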
After I started the script, I logged in to the first Raspberry Pi and typed
sudo reboot to reboot it. The script kept on running just fine, and if
there were any hiccups along the way, I couldn't tell them apart from
the occasional skipped second I saw beforehand. Once the first
Raspberry Pi came back up, I repeated the reboot on the second one, just
to confirm that I could lose either node and still be fine. This kind
of redundancy is not bad, considering it took only a couple of commands.
There you have it. Now you have the foundation set with a redundant file store across two Raspberry Pis. In my next column, I will build on top of the foundation by adding a new redundant service that takes advantage of the shared storage.
Kyle Rankin is a VP of engineering operations at Final, Inc., the author of a number of books including DevOps Troubleshooting and The Official Ubuntu Server Book, and is a columnist for Linux Journal. Follow him @kylerankin.