Xen Virtualization and Linux Clustering, Part 2
Now is a good time to test your PVM configuration to make sure it works correctly on both the master and slaves. Start by setting up the appropriate links on the master to allow the PVM executables to run without specifying their paths:
# ln -s $PVM_ROOT/lib/pvm /usr/bin/pvm
# ln -s $PVM_ROOT/lib/aimk /usr/bin/aimk
Next, compile an example PVM program:
# cd $PVM_ROOT/examples
# aimk hello hello_other
If they are not booted already, boot each Xen slave using commands similar to the following:
# xm create /etc/xen/debian_slave1.conf
# xm create /etc/xen/debian_slave2.conf
# xm create /etc/xen/debian_slave3.conf
Once your slaves are booted, start the PVM daemons on the master and slaves by running the command:
# pvm pvm.hosts
This command starts the PVM daemons on all cluster nodes specified in the pvm.hosts file and then leaves you at a PVM console. You can use the conf command to see a list of all hosts that are successfully running a PVM daemon. The quit command exits the PVM console but leaves all of the PVM daemons running, which is what we want. An example of this is shown below:
pvm> conf
conf
4 hosts, 1 data format
                    HOST     DTID     ARCH   SPEED       DSIG
                  master    40000    LINUX    1000 0x00408841
           debian_slave3    c0000    LINUX    1000 0x00408841
           debian_slave1   100000    LINUX    1000 0x00408841
           debian_slave2   140000    LINUX    1000 0x00408841
pvm> quit
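For reference, the pvm.hosts file passed to pvm above is simply a plain-text list of cluster nodes, one hostname per line (lines beginning with # are comments). A minimal version matching this four-node cluster might look like the following; the hostnames must match those configured for your domains and resolve on your network:

```
# pvm.hosts -- one PVM node per line
master
debian_slave1
debian_slave2
debian_slave3
```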
Now that the PVM daemons are running, copy the hello_other executable that we compiled above to the slaves. The same approach can also be used to copy any other executables that the slaves will need to run.
# cd $PVM_ROOT/bin/LINUX
# scp hello_other root@debian_slave1:$PVM_ROOT/bin/LINUX/hello_other
# scp hello_other root@debian_slave2:$PVM_ROOT/bin/LINUX/hello_other
# scp hello_other root@debian_slave3:$PVM_ROOT/bin/LINUX/hello_other
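With more slaves, typing one scp per host quickly gets tedious. As a sketch, the copies above can be generated by a small shell loop; the SLAVES list is an assumption that must match the entries in your pvm.hosts file, and the echo makes this a dry run that only prints each command (delete the echo to actually perform the copies):

```shell
# Dry-run sketch: print one scp command per slave.
# SLAVES is an assumption -- edit it to match your pvm.hosts file.
SLAVES="debian_slave1 debian_slave2 debian_slave3"
for slave in $SLAVES; do
    echo scp hello_other "root@$slave:\$PVM_ROOT/bin/LINUX/hello_other"
done
```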
Now run the hello program on the master:

# $PVM_ROOT/bin/LINUX/hello
This should produce output similar to the following:
i'm t40009 from tc0003: hello, world from debian_slave3
Congratulations! You now have a working cluster set up on your computer.
Once you're done running PVM programs, you can stop the PVM daemons on the master and slaves by using the halt command from the PVM console:
# pvm
pvmd already running.
pvm> halt
halt
Terminated
Now that you have multiple domains created and configured for use as a cluster, we can install and test a useful PVM program. I chose to test the cluster with an open-source ray tracer. Ray tracing traces rays into a scene and performs lighting calculations to produce realistic computer-generated images. Because a ray must be traced for each pixel on the screen, ray tracing parallelizes naturally: different members of the cluster calculate the colors of different pixels simultaneously, reducing the render time (at least, it would if we were actually using multiple computers).
In this section, I describe the installation and use of a PVM patch for the POV-Ray ray tracer called PVMPOV. PVMPOV divides the rendering process into one master and many slave tasks, distributing the rendering across multiple systems. The master divides the image into small blocks that are assigned to slaves. The slaves return completed blocks to the master, which the master ultimately combines to generate the final image.
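As a preview of how this division of labor is controlled, a typical PVMPOV invocation looks something like the following; the scene filename and the specific block and task counts here are placeholder assumptions, with the flags as documented in the PVMPOV HOWTO:

```
# pvmpov +Iscene.pov +Oscene.tga +NT8 +NW64 +NH64
```

Here +NT sets the number of PVM tasks to spawn, while +NW and +NH set the width and height, in pixels, of the blocks handed out to the slaves.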
Begin by installing PVMPOV 3.1 on Domain-0. Installation instructions can be found in the PVMPOV HOWTO in Chapter 1, "Setting up PVMPOV". If the first wget command in Section 1.1 gives you trouble, try
instead. Also, in Section 1.4, it should not be necessary to run the command aimk newsvga.
After completing these instructions on the master, create a directory for storing .pov files (POV-Ray input files) as well as the generated images. On my system, I created a folder named /etc/xen/benchmark. The .pov files may need access to other POV-Ray include files, so create a link to the appropriate directory, which is located with the PVMPOV source that you compiled above. As an example, I used the following command on my system:
# ln -s /install/povray/pvmpov3_1g_2/povray31/include /etc/xen/benchmark/include
Once you have completed the PVMPOV installation on the master, you must copy the required binaries, libraries and other files to the slaves. The following example shows how to do this for debian_slave1 from the Domain-0 console:
# cd $PVM_ROOT/bin/LINUX
# scp pvmpov root@debian_slave1:$PVM_ROOT/bin/LINUX/pvmpov
# scp x-pvmpov root@debian_slave1:$PVM_ROOT/bin/LINUX/x-pvmpov
# scp /usr/lib/libpng* root@debian_slave1:/usr/lib/
# scp /usr/lib/libz* root@debian_slave1:/usr/lib/
# scp /usr/X11R6/lib/libX11.* root@debian_slave1:/usr/lib/
# ssh debian_slave1
(remote)# cd /etc
(remote)# mkdir xen
(remote)# cd xen
(remote)# mkdir benchmark
(remote)# exit
# cd /etc/xen/benchmark
# scp -r * root@debian_slave1:/etc/xen/benchmark/
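These same steps must then be repeated for debian_slave2 and debian_slave3. The following dry-run sketch prints the equivalent commands for every slave; the SLAVES list and the use of a single mkdir -p over ssh (in place of the interactive mkdir steps above) are assumptions, and each echo only prints its command (delete the echoes to actually execute):

```shell
# Dry-run sketch: print the PVMPOV provisioning commands for each slave.
# SLAVES is an assumption -- edit it to match your pvm.hosts file.
SLAVES="debian_slave1 debian_slave2 debian_slave3"
for slave in $SLAVES; do
    echo scp pvmpov x-pvmpov "root@$slave:\$PVM_ROOT/bin/LINUX/"
    echo scp /usr/lib/libpng* /usr/lib/libz* "root@$slave:/usr/lib/"
    echo scp /usr/X11R6/lib/libX11.* "root@$slave:/usr/lib/"
    echo ssh "$slave" "mkdir -p /etc/xen/benchmark"
    echo scp -r /etc/xen/benchmark/* "root@$slave:/etc/xen/benchmark/"
done
```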