Xen Virtualization and Linux Clustering, Part 2
Before we can generate our first ray-traced image, we need a scene to render. The PovBench Web site provides a POV-Ray scene called skyvase.pov that can be used for benchmarking purposes. Download this scene using the following commands:
# cd /etc/xen/benchmark
# wget http://www.haveland.com/povbench/skyvase.pov
Next, copy the downloaded skyvase.pov file to each slave. For example, for debian_slave1:
# scp /etc/xen/benchmark/skyvase.pov \
  root@debian_slave1:/etc/xen/benchmark/skyvase.pov
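Rather than repeating the scp command by hand for every slave, you can copy the scene in a loop. The sketch below assumes the slaves are named debian_slave1 through debian_slave3 and echoes each command so you can review it first; remove the echo to actually perform the copies:

```shell
# Copy skyvase.pov to each slave (dry run: echo prints the
# commands instead of running them; drop "echo" to execute).
for i in 1 2 3; do
    echo scp /etc/xen/benchmark/skyvase.pov \
        root@debian_slave$i:/etc/xen/benchmark/skyvase.pov
done
```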
Once you've copied the scene to all of the slaves, you are ready to generate an image. Be sure to boot the required Xen slaves and start PVM daemons on each slave. For example, before running PVMPOV with three slaves:
# xm create /etc/xen/debian_slave1.conf
# xm create /etc/xen/debian_slave2.conf
# xm create /etc/xen/debian_slave3.conf
# pvm pvm.hosts
pvm> conf
conf
4 hosts, 1 data format
                    HOST     DTID     ARCH   SPEED       DSIG
                  master    40000    LINUX    1000 0x00408841
           debian_slave3    c0000    LINUX    1000 0x00408841
           debian_slave1   100000    LINUX    1000 0x00408841
           debian_slave2   140000    LINUX    1000 0x00408841
pvm> quit
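If you are scripting this setup, you can sanity-check the conf output instead of eyeballing it. The sketch below counts the LINUX host lines in a captured copy of the output shown above; in practice you would capture the real output, for example by piping commands into the pvm console (an assumption about your setup, so verify it interactively first):

```shell
# Sample conf output, captured into a variable for illustration.
CONF_OUTPUT='                    HOST     DTID     ARCH   SPEED       DSIG
                  master    40000    LINUX    1000 0x00408841
           debian_slave3    c0000    LINUX    1000 0x00408841
           debian_slave1   100000    LINUX    1000 0x00408841
           debian_slave2   140000    LINUX    1000 0x00408841'

# Each configured host appears as one LINUX line: master + 3 slaves.
HOST_COUNT=$(echo "$CONF_OUTPUT" | grep -c LINUX)
echo "$HOST_COUNT hosts in the virtual machine"
```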
PVMPOV is run using the pvmpov binary. You also must supply an input file specifying the scene to be rendered, in our case, skyvase.pov. The list of supported PVMPOV command-line arguments is discussed in the PVMPOV HOWTO. As an example, the following command shows how to render the skyvase.pov scene at 1024x768 resolution on three slaves, using 64x64 pixel blocks and storing the generated image in skyvase.tga:
# cd /etc/xen/benchmark
# pvmpov +Iskyvase.pov +Oskyvase.tga +Linclude \
  pvm_hosts=debian_slave1,debian_slave2,debian_slave3 \
  +NT3 +NW64 +NH64 +v -w1024 -h768
The command-line arguments specify the following settings:
+Iskyvase.pov - Use skyvase.pov as input
+Oskyvase.tga - Store output as skyvase.tga
+Linclude - Search for POV-Ray include files (for shapes and the like) in the ./include directory
pvm_hosts=debian_slave1,debian_slave2,debian_slave3 - Specify which PVM hosts to use as slaves
+NT3 - Divide the rendering into three PVM tasks (one for each slave)
+NW64 - Change the width of blocks to 64 pixels
+NH64 - Change the height of blocks to 64 pixels
+v - Provide verbose reporting of statistics while rendering
-w1024 - The rendered image should have a width of 1024 pixels
-h768 - The rendered image should have a height of 768 pixels
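To see how the block size and the resolution interact, note that a 1024x768 render divided into 64x64 blocks yields (1024/64) x (768/64) = 192 blocks, which PVM hands out among the three tasks as they finish their previous blocks. A quick shell check of that arithmetic:

```shell
# Number of 64x64 blocks in a 1024x768 render.
WIDTH=1024; HEIGHT=768; BLOCK=64
BLOCKS=$(( (WIDTH / BLOCK) * (HEIGHT / BLOCK) ))
echo "$BLOCKS blocks distributed among the PVM tasks"
```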
On my system, this scene takes about 40-45 seconds to render. Once the program completes, you should find a file named /etc/xen/benchmark/skyvase.tga that contains the generated image. If everything worked correctly, congratulations! You just successfully used a Linux cluster to run a parallel ray tracer, all on a single physical computer running multiple concurrent operating systems. Go ahead. Pat yourself on the back.
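As a rough sanity check on the output file before opening it, an uncompressed 24-bit TGA of this size should come to about width x height x 3 bytes plus an 18-byte header. This is a back-of-the-envelope figure; the exact size depends on how POV-Ray writes the file:

```shell
# Approximate size of an uncompressed 24-bit 1024x768 TGA:
# 3 bytes per pixel plus an 18-byte TGA header (an assumption;
# the actual file may differ slightly).
EXPECTED=$(( 1024 * 768 * 3 + 18 ))
echo "expect roughly $EXPECTED bytes for skyvase.tga"
```

Compare that against `ls -l /etc/xen/benchmark/skyvase.tga`; a file that is wildly smaller usually means the render aborted partway through.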
And if things aren't working yet, don't give up. With a little troubleshooting, you're sure to figure it out, and believe me, I've done my fair share of troubleshooting.
Let's step back for a minute and think about everything we've accomplished here. We started by installing Xen and configuring Domain-0 as well as several unprivileged domains. During this process, we got practical experience using LVM to set up unprivileged domain filesystems, and we saw how we can create archive backups of an entire OS filesystem. We also learned how to set up a small cluster using PVM. We even tested our cluster using real-world parallel software.
By now, you should feel like an expert in using Xen virtualization and Linux clustering, especially if you had to do any troubleshooting on your own. If you made it this far, you now can mention the word "virtualization" and explain that your computer not only has multiple operating systems installed but it can run them at the same time! And if that doesn't impress some people, mention that your computer also doubles as a Linux cluster.
Ryan Mauer is a Computer Science graduate student at Eastern Washington University. In addition to Xen virtualization, he also dabbles in 3-D computer graphics programming as he attempts to finish his Master's thesis.