The OSCAR Revolution

Richard describes the history and goals of the Open Source Cluster Application Resource.
The OSCAR Experience: First Impressions

The first thing one notices when untarring the OSCAR file is that the OSCAR integration and test team has done a thorough job; there is extensive documentation on how to install OSCAR, the system requirements, the licensing (GPL) and the theory behind OSCAR itself. There is a quick start guide for the impatient cluster administrator, as well as a full descriptive text. One also notices that there's nothing additional to download; it's all included in the single OSCAR tar file. OSCAR takes the traditional view of clusters—a single server with N compute nodes; the server is responsible for installing, scheduling and monitoring the compute nodes. Nodes in the cluster should be running homogeneous software, meaning the same distribution and version of Linux. The first command the user enters is install_cluster, which does a multitude of things: creates necessary directories; manages NFS and xinetd; installs LAM/MPI, C3, PBS, Maui, OpenSSH, SIS, Perl, SystemImager and MPICH; updates various profiles and configuration scripts; and launches the OSCAR wizard.
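
As a point of reference, the whole bootstrap amounts to a few commands run as root on the server. The tarball name below is illustrative, and depending on the OSCAR release, install_cluster may take additional arguments (such as the name of the server's internal network interface):

    # Unpack the distribution and run the top-level installer (names illustrative)
    tar -xzf oscar-1.x.tar.gz
    cd oscar-1.x
    ./install_cluster      # sets up directories, NFS, xinetd and the bundled
                           # packages, then launches the OSCAR wizard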

If all goes well, you're in for a pleasant surprise, namely, the OSCAR wizard. The OSCAR team felt the wizard would be another distinguishing feature of OSCAR in the field of Linux cluster solutions. The purpose of the wizard is clear: follow the wizard and you too can install a cluster painlessly. Each step along the wizard's path has entry and exit criteria. Once the exit criteria are met, OSCAR gives a success message to indicate it's safe to move on to the next step.

Figure 1. The OSCAR Wizard

Following the wizard, pressing the Build OSCAR client image button brings up the second panel, Create a SystemImager Image.

Figure 2. Building a SystemImager Image

The purpose of the SystemImager panel is to create a filesystem on the server that will later be installed on each client. The Image Name field allows the user to create multiple SystemImager images, each with a unique name. The Package File field provides the list of packages to be installed on the client; OSCAR provides sample lists that meet most user requirements. The Packages Directory field tells OSCAR where to find the RPMs, and the Disk Partition File field allows the user to customize the disk partitions. Again, OSCAR provides default disk partition definition files for both IDE and SCSI drives. Pressing the Build Image button starts the process of building a client image on the server. Once complete, it's time to go back to the wizard for step two, defining the OSCAR clients.
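
Before moving on, it may help to see roughly what the Package File and Disk Partition File hold. The fragments below are illustrative only; the exact syntax and package choices are not OSCAR's, and the sample files shipped with OSCAR are the authoritative reference:

    # Package list fragment: RPM package names, one per line (illustrative)
    glibc
    openssh-server
    lam
    pbs-mom

    # Illustrative partition layout for an IDE client disk; the real
    # OSCAR/SIS partition-file syntax may differ from this sketch
    /dev/hda1   ext2    64MB    /boot
    /dev/hda2   swap    256MB   swap
    /dev/hda3   ext2    rest    /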

Figure 3. Adding Clients to a SystemImager Image

From the Add Clients panel, the user can specify a range of IP addresses to be associated with a list of new clients. Each client is associated with an image name using the Image Name field. One can define a set of clients in a range of IP addresses, each having the same netmask and default gateway. Pressing the Addclients button builds client definitions for SIS. Once complete, it's back to step three on the wizard, Setup Networking.
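
For illustration only (the hostnames and addresses here are hypothetical, not OSCAR defaults), defining four clients starting at 192.168.1.2 with a 255.255.255.0 netmask leaves the server with a host table along these lines:

    192.168.1.1   oscarserver   # cluster head node
    192.168.1.2   node1
    192.168.1.3   node2
    192.168.1.4   node3
    192.168.1.5   node4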

Figure 4. MAC Address Collection

From the Setup Networking panel, MAC addresses are collected for each client in the cluster. If the node is capable of true network (PXE) boot, you simply associate a MAC address with a client, and you're ready to power up the node. If the node is not PXE-enabled, you can write a SystemImager boot diskette using the Build Autoinstall Floppy button. Once the MAC addresses are collected, it's time to press the Configure the DHCP Server button and boot all the nodes to initiate the Linux installation.
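
Behind the scenes, Configure the DHCP Server amounts to writing per-host entries into the server's dhcpd.conf so that each collected MAC address always receives its assigned IP. A hand-written equivalent, with made-up MAC addresses and hostnames, would look roughly like this:

    subnet 192.168.1.0 netmask 255.255.255.0 {
        host node1 {
            hardware ethernet 00:50:56:aa:bb:01;   # MAC collected by the wizard
            fixed-address 192.168.1.2;             # IP assigned in Add Clients
        }
        host node2 {
            hardware ethernet 00:50:56:aa:bb:02;
            fixed-address 192.168.1.3;
        }
    }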

Once all the nodes are installed, each node starts a really annoying and incessant beeping, telling the system administrator to pop out the diskette or turn off PXE and reboot the node from the hard drive. Once they are all booted, the nodes are ready to Complete Cluster Setup from the wizard (really just syncing the time between the server and clients and running any package-sensitive post-installation scripts). The Test Cluster Setup button from the wizard runs short jobs, checking each flavor of scheduler and parallel library.
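
You can run the same sort of sanity check by hand once the wizard finishes. Here is a minimal sketch, assuming a small MPI program already compiled as hello (the program name, node count and options are assumptions, and the exact mpirun invocation differs between LAM/MPI and MPICH):

    # hello.pbs -- a tiny scheduler/MPI test job
    #PBS -N mpi-hello
    #PBS -l nodes=2
    #PBS -j oe
    cd $PBS_O_WORKDIR
    mpirun -np 2 ./hello     # run the hypothetical MPI binary on the allocated nodes

    # Submit it from the server and watch the queue:
    qsub hello.pbs
    qstat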

Once the cluster is fully installed and functioning, there are test scripts to check the overall health of the cluster. Running the test_install script checks that PBS or the Maui Scheduler is configured and running, that the C3 tools are installed and that the cluster is ready to start accepting parallel jobs.
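
Beyond test_install, the C3 tools themselves give a quick way to eyeball cluster health from the server, assuming the default cluster configuration OSCAR set up:

    cexec uptime     # run a command on every compute node at once
    cexec df -h      # check free disk space across the cluster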

______________________

Comments


Re: The OSCAR Revolution

Anonymous

As Senior Executive Manager of Product Operational Testing (POT) at the Maui "High Times" Computing Center, let me say that we're like totally stoked that the OSCAR dudes are using Maui Wowee scheduler in their groovy software!

We're gonna be like helping out with their upcoming Benchmark Oscar for the Next Generation (BONG) project. Oops maybe I wasn't sposed to mention that yet, but kudos all around and oh yeah I forgot to mention that we now print all of our documentation on like organically grown hemp stock. But it mostly just gives you a headache (reading or smoking it). Bummer.

specialT@mhtcc.com

Ericsson and OSCAR

ibra

One of the projects at the Open Systems Lab (Ericsson Research) is the ARIES project, which targets improving the clustering capabilities of Linux to fulfill carrier-class requirements. ARIES shares some overlapping activities with the OSCAR project. However, the typical Ericsson Linux cluster supports many high-end characteristics that are not available on an OSCAR cluster.

Telecommunication systems are one of several specialized platforms that can take full advantage of clustering. These systems have some of the most stringent requirements in terms of reliability, availability and scalability: for any mission-critical server application they must be available 99.999 percent of the time, including during hardware and software upgrades (operating system included). Among these characteristics are built-in redundancy schemes at different levels, such as redundant Ethernet connections, redundant Network File System servers and software RAID for data redundancy, as well as special methods for booting diskless nodes, optimized traffic distribution, load-balancing schemes and so on.

As part of Ericsson

Re: The OSCAR Revolution

Anonymous

OSCAR 1.2.1rh72 is available and supports Red Hat 7.2; future versions will support Mandrake distributions as well.
