Two Pi R

After the brick directory is available on each Raspberry Pi and the glusterfs-server package has been installed, make sure both Raspberry Pis are powered on. Then, log in to whichever node you consider the master, and use the gluster peer probe command to tell the master to trust the IP or hostname that you pass it as a member of the cluster. You can pass either the IP address of the secondary node or, if you are fancy and have DNS set up, its hostname; here I'll use my secondary node's hostname, pi2:

pi@pi1 ~ $ sudo gluster peer probe pi2
Probe successful

Now that my pi1 server trusts pi2, I can create my first volume, which I will call gv0. To do this, I run the gluster volume create command from the master node:

pi@pi1 ~ $ sudo gluster volume create gv0 replica 2 pi1:/srv/gv0 pi2:/srv/gv0
Creation of volume gv0 has been successful. Please start 
the volume to access data.

Let's break this command down a bit. The first part, gluster volume create, tells the gluster command I'm going to create a new volume. The next argument, gv0, is the name I want to assign the volume. That name is what clients will use to refer to the volume later on. After that, the replica 2 argument configures this volume to use replication instead of striping data between bricks; in this case, it will make sure any data is replicated across two bricks. Finally, I define the two individual bricks I want to use for this volume: the /srv/gv0 directory on pi1 and the /srv/gv0 directory on pi2.
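For contrast, here's a hypothetical variant I did not run for this article: leaving out the replica 2 argument would create a plain distributed volume, where each file lands on one brick or the other instead of being mirrored, so losing a node would lose whatever files lived on its brick:

```shell
# Hypothetical: a distributed (non-replicated) volume across the same two
# nodes. Capacity doubles, but there is no redundancy -- not what we want here.
sudo gluster volume create gv0dist pi1:/srv/gv0dist pi2:/srv/gv0dist
```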

Now that the volume has been created, I just need to start it:

pi@pi1 ~ $ sudo gluster volume start gv0
Starting volume gv0 has been successful

Once the volume has been started, I can use the volume info command on either node to see its status:

$ sudo gluster volume info

Volume Name: gv0
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
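That output is also easy to check from a script. As a small sketch, here I parse a sample of the volume info output above with awk to pull out the Status field; on a live node, you would pipe sudo gluster volume info into awk instead of using the canned sample text:

```shell
# Sample output mirroring the article; replace the variable with
# "$(sudo gluster volume info)" on a real node.
info='Volume Name: gv0
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp'

# Split each line on ": " and print the value of the Status line.
status=$(printf '%s\n' "$info" | awk -F': ' '/^Status:/ {print $2}')
echo "$status"
```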

Configure the GlusterFS Client

Now that the volume is started, I can mount it as a GlusterFS-type filesystem from any client that has GlusterFS support. First, though, I want to mount it from my two Raspberry Pis, since I want them to be able to write to the volume themselves. To do this, I will create a new mountpoint on the filesystem of each Raspberry Pi and use the mount command to mount the volume on it:

$ sudo mkdir -p /mnt/gluster1
$ sudo mount -t glusterfs pi1:/gv0 /mnt/gluster1
$ df
Filesystem         1K-blocks    Used Available Use% Mounted on
rootfs               1804128 1496464    216016  88% /
/dev/root            1804128 1496464    216016  88% /
devtmpfs               86184       0     86184   0% /dev
tmpfs                  18888     216     18672   2% /run
tmpfs                   5120       0      5120   0% /run/lock
tmpfs                  37760       0     37760   0% /run/shm
/dev/mmcblk0p1         57288   18960     38328  34% /boot
pi1:/gv0             1804032 1496448    215936  88% /mnt/gluster1

The more pedantic readers among you may be saying to yourselves, "Wait a minute, if I am specifying a specific IP address here, what happens when that node goes down?" It turns out that this address is used only to pull down the complete list of bricks used in the volume, and from that point on, the redundant list of bricks is what the client uses when accessing the volume.
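If you want a fallback at mount time as well, the GlusterFS client supports a backupvolfile-server mount option, which names a second node to fetch the volume file from if the first is unreachable. A hedged sketch of the fstab entry (the exact option name can vary between GlusterFS versions, so check the mount.glusterfs man page for yours):

```shell
# Hypothetical fstab variant: if pi1 is down at mount time, the client
# falls back to pi2 to fetch the volume file.
pi1:/gv0  /mnt/gluster1  glusterfs  defaults,_netdev,backupvolfile-server=pi2  0  0
```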

Once you mount the filesystem, play around with creating files and then looking into /srv/gv0. You should be able to see (but again, don't touch) files that you've created from /mnt/gluster1 on the /srv/gv0 bricks on both nodes in your cluster:

pi@pi1 ~ $ sudo touch /mnt/gluster1/test1
pi@pi1 ~ $ ls /mnt/gluster1/test1
pi@pi1 ~ $ ls /srv/gv0
pi@pi2 ~ $ ls /srv/gv0

After you are satisfied that you can mount the volume, make it permanent by adding an entry like the following to the /etc/fstab file on your Raspberry Pis:

pi1:/gv0  /mnt/gluster1  glusterfs  defaults,_netdev  0  0

Note that if you also want to access this GlusterFS volume from other clients on your network, just install the GlusterFS client package for your distribution (for Debian-based distributions, it's called glusterfs-client), and then create a mountpoint and perform the same mount command as I listed above.
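On a Debian-based client, for example, the steps would look something like this (assuming, as above, that the pi1 hostname resolves from that client; any node in the cluster works as the mount source):

```shell
# Install the GlusterFS client, then create a mountpoint and mount the volume.
sudo apt-get install glusterfs-client
sudo mkdir -p /mnt/gluster1
sudo mount -t glusterfs pi1:/gv0 /mnt/gluster1
```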

Test Redundancy

Now that I have a redundant filesystem in place, let's test it. Since I wanted to make sure that I could take down either of the two nodes and still have access to the files, I configured a separate client to mount this GlusterFS volume. Then I created a simple script called glustertest inside the volume:


#!/bin/sh

while [ 1 ]; do
  date > /mnt/gluster1/test1
  cat /mnt/gluster1/test1
  sleep 1
done
This script runs in an infinite loop and just copies the current date into a file inside the GlusterFS volume and then cats it back to the screen. Once I make the file executable and run it, I should see a new date pop up about every second:

# chmod a+x /mnt/gluster1/glustertest
root@moses:~# /mnt/gluster1/glustertest
Sat Mar  9 13:19:02 PST 2013
Sat Mar  9 13:19:04 PST 2013
Sat Mar  9 13:19:05 PST 2013
Sat Mar  9 13:19:06 PST 2013
Sat Mar  9 13:19:07 PST 2013
Sat Mar  9 13:19:08 PST 2013

Every now and then, I noticed that the output would skip a second, but in this case, I think it was just a function of the loop not executing exactly once per second: each iteration takes the one-second sleep plus the little extra time needed to run date and cat, and occasionally those extra fractions of a second add up to a skipped timestamp.

After I started the script, I logged in to the first Raspberry Pi and typed sudo reboot to reboot it. The script kept on running just fine, and if there were any hiccups along the way, I couldn't tell them apart from the occasional skipped second I saw beforehand. Once the first Raspberry Pi came back up, I repeated the reboot on the second one, just to confirm that I could lose either node and still be fine. That's not bad redundancy, considering it took only a couple of commands to set up.

There you have it. Now you have the foundation set with a redundant file store across two Raspberry Pis. In my next column, I will build on top of the foundation by adding a new redundant service that takes advantage of the shared storage.


Kyle Rankin is VP of engineering operations at Final, Inc., the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him @kylerankin



Services on GlusterFS

Tha-Fox: I believe it's not possible to put your root filesystem on GlusterFS, but I'm planning to install ownCloud on GlusterFS. Do you have any idea how fast a connection replication needs in order to work? I would have one server at the office and another one at home, and the connection between them is 60/10 Mbit/s.

Geo-Replication Feature in GlusterFS - Asynchronous Replication

JMW: The geo-replication feature was created specifically for those cases where you have a WAN link or some other unreliable network connection. It's eventually consistent, as opposed to the default replication, which is synchronous and strongly consistent.

Unfortunately, the LJ comments bot doesn't allow me to post links here :( but if you google "geo-replication glusterfs", you'll find some good recipes.

GlusterFS will not work well over WAN

Anonymous: GlusterFS will not work well over a WAN.
