Two Pi R
After the brick directory is available on each Raspberry Pi and the
glusterfs-server package has been installed, make sure both Raspberry
Pis are powered on. Then, log in to whichever node you consider the master,
and use the gluster peer probe command to tell the master to trust the IP
or hostname that you pass it as a member of the cluster. In this case, I
will use the IP of my secondary node, but if you are fancy and have DNS
set up, you also could use its hostname instead:
pi@pi1 ~ $ sudo gluster peer probe 192.168.0.122
Probe successful
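If you want to confirm the handshake from either side, the gluster peer status command lists the peers a node knows about. Output along these lines is what you should expect (the exact fields vary a bit between GlusterFS versions, and I've omitted the UUID line here):

pi@pi1 ~ $ sudo gluster peer status
Number of Peers: 1

Hostname: 192.168.0.122
State: Peer in Cluster (Connected)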
Now that my pi1 server (192.168.0.121) trusts pi2 (192.168.0.122),
I can create my first volume, which I will call gv0. To do this, I run
the gluster volume create command from the master node:
pi@pi1 ~ $ sudo gluster volume create gv0 replica 2 \
    192.168.0.121:/srv/gv0 192.168.0.122:/srv/gv0
Creation of volume gv0 has been successful. Please start
the volume to access data.
Let's break this command down a bit. The first part,
create, tells the gluster command I'm going to create a new volume. Next,
gv0 is the name I want to assign to the volume. That name is
what clients will use to refer to the volume later on. After that, the
replica 2 argument configures this volume to use replication instead of
striping data between bricks. In this case, it will make sure any data is
replicated across two bricks. Finally, I define the two individual bricks
I want to use for this volume: the /srv/gv0 directory on 192.168.0.121
and the /srv/gv0 directory on 192.168.0.122.
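For contrast, if I later grew the cluster to four bricks and still passed replica 2, GlusterFS would group the bricks into replica pairs in the order they are listed and then distribute files across those pairs. The two extra hosts in this sketch are hypothetical:

$ sudo gluster volume create gv0 replica 2 \
    192.168.0.121:/srv/gv0 192.168.0.122:/srv/gv0 \
    192.168.0.123:/srv/gv0 192.168.0.124:/srv/gv0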
Now that the volume has been created, I just need to start it:
pi@pi1 ~ $ sudo gluster volume start gv0
Starting volume gv0 has been successful
Once the volume has been started, I can use the
info command on
either node to see its status:
$ sudo gluster volume info

Volume Name: gv0
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.0.121:/srv/gv0
Brick2: 192.168.0.122:/srv/gv0
Configure the GlusterFS Client
Now that the volume is started, I can mount it as a GlusterFS-type filesystem from any client that has GlusterFS support. First, though, I want to mount it from my two Raspberry Pis, because I want them to be able to write to the volume themselves. To do this, I will create a new mountpoint on each Raspberry Pi and use the mount command to mount the volume on it:
$ sudo mkdir -p /mnt/gluster1
$ sudo mount -t glusterfs 192.168.0.121:/gv0 /mnt/gluster1
$ df
Filesystem         1K-blocks    Used Available Use% Mounted on
rootfs               1804128 1496464    216016  88% /
/dev/root            1804128 1496464    216016  88% /
devtmpfs               86184       0     86184   0% /dev
tmpfs                  18888     216     18672   2% /run
tmpfs                   5120       0      5120   0% /run/lock
tmpfs                  37760       0     37760   0% /run/shm
/dev/mmcblk0p1         57288   18960     38328  34% /boot
192.168.0.121:/gv0   1804032 1496448    215936  88% /mnt/gluster1
The more pedantic readers among you may be saying to yourselves, "Wait a minute, if I am specifying a specific IP address here, what happens when 192.168.0.121 goes down?" It turns out that this IP address is used only to pull down the complete list of bricks used in the volume, and from that point on, the redundant list of bricks is what will be used when accessing the volume.
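One caveat: the server you name does need to be up at the moment a client first mounts the volume, since that is where the volume information is fetched from. Depending on your GlusterFS version, you can name a fallback server for that initial fetch with a mount option; older releases call it backupvolfile-server (newer ones use backup-volfile-servers), so check the mount.glusterfs man page for your release. It looks something like this:

$ sudo mount -t glusterfs -o backupvolfile-server=192.168.0.122 \
    192.168.0.121:/gv0 /mnt/gluster1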
Once you mount the filesystem, play around with creating files and then looking into /srv/gv0. You should be able to see (but again, don't touch) files that you've created from /mnt/gluster1 on the /srv/gv0 bricks on both nodes in your cluster:
pi@pi1 ~ $ sudo touch /mnt/gluster1/test1
pi@pi1 ~ $ ls /mnt/gluster1/test1
/mnt/gluster1/test1
pi@pi1 ~ $ ls /srv/gv0
test1
pi@pi2 ~ $ ls /srv/gv0
test1
After you are satisfied that you can mount the volume, make it permanent by adding an entry like the following to the /etc/fstab file on your Raspberry Pis:
192.168.0.121:/gv0 /mnt/gluster1 glusterfs defaults,_netdev 0 0
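To test that entry without waiting for a reboot, you can unmount the volume and let mount -a pick it back up from /etc/fstab:

$ sudo umount /mnt/gluster1
$ sudo mount -a
$ df | grep gluster1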
Note that if you also want to access this GlusterFS volume from other clients on your network, just install the GlusterFS client package for your distribution (for Debian-based distributions, it's called glusterfs-client), and then create a mountpoint and perform the same mount command as I listed above.
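On a Debian-based client, for example, the whole procedure amounts to something like this:

$ sudo apt-get install glusterfs-client
$ sudo mkdir -p /mnt/gluster1
$ sudo mount -t glusterfs 192.168.0.121:/gv0 /mnt/gluster1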
Now that I have a redundant filesystem in place, let's test it. Since I want to make sure that I could take down either of the two nodes and still have access to the files, I configured a separate client to mount this GlusterFS volume. Then I created a simple script called glustertest inside the volume:
#!/bin/bash

while [ 1 ]
do
    date > /mnt/gluster1/test1
    cat /mnt/gluster1/test1
    sleep 1
done
This script runs in an infinite loop and just copies the current date into a file inside the GlusterFS volume and then cats it back to the screen. Once I make the file executable and run it, I should see a new date pop up about every second:
# chmod a+x /mnt/gluster1/glustertest
root@moses:~# /mnt/gluster1/glustertest
Sat Mar  9 13:19:02 PST 2013
Sat Mar  9 13:19:04 PST 2013
Sat Mar  9 13:19:05 PST 2013
Sat Mar  9 13:19:06 PST 2013
Sat Mar  9 13:19:07 PST 2013
Sat Mar  9 13:19:08 PST 2013
I noticed that every now and then the output would skip a second, but I think that was just a function of the loop not executing exactly once per second: each pass takes the one-second sleep plus the small amount of extra time needed to run date and cat, and eventually that extra sub-second adds up to a skipped timestamp.
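If you'd rather verify that explanation than take my word for it, a variant of the script that times each pass makes the drift visible. This is just a sketch, and it assumes GNU date (for the %N nanosecond format) and bc are installed:

#!/bin/bash

while [ 1 ]
do
    start=$(date +%s.%N)
    date > /mnt/gluster1/test1
    cat /mnt/gluster1/test1
    sleep 1
    # Each pass takes sleep's one second plus the time spent
    # writing and reading the file over GlusterFS, so the
    # total always lands a bit above 1.0.
    echo "loop took $(echo "$(date +%s.%N) - $start" | bc)s"
done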
After I started the script, I logged in to the first Raspberry Pi
and ran sudo reboot to reboot it. The script kept on
running just fine, and if there were any hiccups along the way, I
couldn't tell them apart from
the occasional skipped second that I saw beforehand. Once the first
Raspberry Pi came back up, I repeated the reboot on the second one, just
to confirm that I could lose either node and still be fine. This kind
of redundancy is not bad, considering it took only a couple of commands.
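If you want more assurance than watching the script output, recent GlusterFS releases also can report whether the replicas need healing after a node returns. The exact subcommand varies by version, but on releases that support it, a clean result looks something like this:

pi@pi1 ~ $ sudo gluster volume heal gv0 info
Brick 192.168.0.121:/srv/gv0
Number of entries: 0

Brick 192.168.0.122:/srv/gv0
Number of entries: 0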
There you have it. Now you have the foundation set with a redundant file store across two Raspberry Pis. In my next column, I will build on top of the foundation by adding a new redundant service that takes advantage of the shared storage.
Kyle Rankin is a VP of engineering operations at Final, Inc., the author of a number of books including DevOps Troubleshooting and The Official Ubuntu Server Book, and is a columnist for Linux Journal. Follow him @kylerankin.