CORBA Program Development, Part 2

This month, we move on to two of CORBA's more advanced facilities: the Naming Service and the Event Service.

In our last article, we introduced the concept of distributed programming with CORBA from a high-level point of view. In order to further flesh out the CORBA infrastructure, we need to detail some of the standard services the OMG (Object Management Group) has defined, which should be supplied at least in part by most ORB vendors. Among these are the Trader Service, the Naming Service, the Event Service, the Interface Repository and the Implementation Repository.

The OMG has defined only the interface to each service while not attempting to provide an implementation. This means an OMG Service is actually nothing more than a CORBA interface written in IDL (Interface Definition Language). If a particular service is not available within a particular ORB or is not well-implemented, the developer always has the option of writing a custom implementation for the interface. In fact, if a vendor is truly CORBA-compliant, one vendor's implementation of a service can be used with another vendor's implementation of the ORB. This ability to mix and match CORBA-compliant implementations allows for flexible approaches to CORBA solutions. In this article, we will describe two of the most commonly provided OMG Services: the Naming Service and the Event Service. Our sample code is written using the feature-rich and GNU-licensed MICO CORBA implementation and demonstrates how to use both the Naming and Event Services in C++.

Last month, we introduced the concept of an IOR (Interoperable Object Reference), which we said was like a phone number or mailing address for the remote object. The client application can use the IOR to locate the remote object and establish communication. In that article, we handed the client application the IOR by writing it to a file and passing the file to the server application at startup. In practice, this is an inconvenient way to design a system. One of the most common approaches to solving the problem of locating objects at runtime is to use the OMG Naming Service. The Naming Service is an interface to a database where an object's name is associated with its IOR.

In order to understand the Naming Service, it is often helpful to think in terms of the UNIX directory structure. The Naming Service is comprised of objects called naming contexts. A naming context can be thought of as a directory within a file system, ultimately deriving from a common root directory (the “root” context). Each name within a naming context must be unique. Since naming contexts are actually objects, a naming context can be registered with another naming context. In effect, this is analogous to creating a subdirectory within another directory in a file system. The hierarchical structure created by this method is called a naming graph. In order to simplify finding objects within a naming graph, the Naming Service allows objects to be referred to by compound names, which are similar to an absolute path name in UNIX.

The name under which an object is registered in the Naming Service is completely discretionary and need not even describe the actual object. In the Naming Service, the object's name is defined by a NameComponent object. These NameComponent objects are then stored in a particular naming context. The NameComponent object actually consists of two parts, an “identifier” and a “kind”. The NameComponent is represented in IDL as:

struct NameComponent {
  Istring id;
  Istring kind;
};

Returning to the UNIX file system analogy, a UNIX file called Consumer.C would have an identifier of Consumer and a kind of C. In the same manner, an object may be stored in a naming context with an identifier of BusinessObject and a kind of java. The developer can thus use any naming standard he wishes when defining objects using the Naming Service.

In order for a CORBA client to use the Naming Service to find other objects, it must first locate the Naming Service itself. The preferred method is the standard ORB operation resolve_initial_references, called with the well-known identifier “NameService”. Under most ORB implementations, this call returns a reference to the “root naming context”, or in effect, the root directory node.
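In MICO's C++ mapping, bootstrapping the root naming context might look like the following sketch. The header names reflect how MICO ships its Naming Service stubs (other ORBs arrange them differently), and error handling is reduced to a nil check:

```cpp
#include <CORBA.h>            // ORB core as shipped with MICO
#include <coss/CosNaming.h>   // Naming Service stubs (MICO's header layout)

int main(int argc, char *argv[])
{
    // Initialize the ORB from the command line, as usual.
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    // Ask the ORB for its pre-configured Naming Service reference.
    CORBA::Object_var obj =
        orb->resolve_initial_references("NameService");

    // Narrow the generic object reference to the root naming context.
    CosNaming::NamingContext_var root =
        CosNaming::NamingContext::_narrow(obj);

    if (CORBA::is_nil(root)) {
        // No Naming Service is configured for this ORB.
        return 1;
    }
    return 0;
}
```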

In simplest terms, when a server application is launched, it registers or “binds” objects it wishes to expose with the Naming Service using compound names. This is accomplished through the bind and rebind methods. The client application can then look up a particular object's IOR simply by resolving the object's compound name, which the client must know. The client application uses the resolve method to find an IOR from a given compound name. Once the name has been resolved and the IOR obtained, the application can narrow (narrowing an object is CORBA terminology for downcasting) the object reference to resolve the actual object implementation; from that point on, the object can be used as usual. Later, our example demonstrates how you might use the Naming Service to register and locate object implementations.
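The two halves of this exchange can be sketched as follows. Account is a hypothetical application interface introduced only for illustration; a real server would pass a reference obtained from its object adapter, and root is the naming context obtained via resolve_initial_references:

```cpp
#include <CORBA.h>
#include <coss/CosNaming.h>
#include "Account.h"   // hypothetical IDL-generated header for Account

// Server side: register an Account object as BusinessObject.java.
void register_account(CosNaming::NamingContext_ptr root,
                      Account_ptr account_ref)
{
    CosNaming::Name name;
    name.length(1);
    name[0].id   = CORBA::string_dup("BusinessObject");
    name[0].kind = CORBA::string_dup("java");

    // rebind() replaces any existing binding; bind() would raise
    // AlreadyBound if the name were already taken.
    root->rebind(name, account_ref);
}

// Client side: resolve the same compound name and narrow the result.
Account_ptr find_account(CosNaming::NamingContext_ptr root)
{
    CosNaming::Name name;
    name.length(1);
    name[0].id   = CORBA::string_dup("BusinessObject");
    name[0].kind = CORBA::string_dup("java");

    CORBA::Object_var obj = root->resolve(name);
    return Account::_narrow(obj);   // nil if the types do not match
}
```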

Another service with an OMG-defined interface is the Event Service. The OMG Event Service specification provides for decoupled message transfer between CORBA objects. The decoupling of communication provided by the Event Service allows for flexibility in terms of communication modes and methods. Specifically, it allows one object (Supplier) the ability to send messages to another object (Consumer) that is interested in receiving those messages without having to know where the receiver is or even whether the receiver is listening. This decoupling provides several important benefits:

  • Suppliers and Consumers do not have to physically handle the communication and do not need any specific knowledge of each other. They simply connect to the Event Service, which mediates their communication.

  • Message passing between the Supplier and Consumer takes place asynchronously. Message delivery does not need to entail blocking (although a pull Consumer may choose to block if it wishes—see below).

  • Event Channels can be set up to be either typed or untyped (not all ORB implementations support typed events).

  • Event Channels will automatically buffer received events until a suitable Consumer expresses interest in them. Note that this implies neither persistence nor store-and-forward capability. Generally, the Event Channel devotes an independent internal queue to each Consumer. These queues typically operate on a FIFO (first-in, first-out) basis; when the buffer is full and new messages arrive faster than the Consumer extracts them, the oldest messages are discarded. Most ORBs will allow you to set the maximum queue length.

  • Events can be confirmed and can have their delivery guaranteed, if the vendor has implemented this capability.

  • Suppliers can choose either to push events onto the channel (push) or to have the channel request events from them (pull). Similarly, a Consumer may obtain events from the channel synchronously (pull) or asynchronously (try_pull), or have the channel deliver events to it (push).

  • A one-to-one correspondence between Suppliers and Consumers is not necessary. There can be multiple Suppliers connected to a single Consumer via the Event Service, as well as a single Supplier connected to one or more Consumers.

Two primary styles of interaction exist between Suppliers or Consumers and the Event Channel: push and pull.
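For the push model, a Supplier obtains a proxy from the channel's SupplierAdmin and pushes events into it as CORBA::Any values. The following is a sketch of the supplier side, assuming an EventChannel reference has already been obtained (for example, via the Naming Service); header names again follow MICO's layout:

```cpp
#include <CORBA.h>
#include <coss/CosEventComm.h>
#include <coss/CosEventChannelAdmin.h>   // header layout as in MICO

// 'channel' is assumed to be an already-resolved EventChannel reference.
void push_one_event(CosEventChannelAdmin::EventChannel_ptr channel)
{
    // The supplier side talks to the channel through a ProxyPushConsumer.
    CosEventChannelAdmin::SupplierAdmin_var sadmin =
        channel->for_suppliers();
    CosEventChannelAdmin::ProxyPushConsumer_var proxy =
        sadmin->obtain_push_consumer();

    // A nil PushSupplier is legal when we do not need
    // disconnect notifications from the channel.
    proxy->connect_push_supplier(CosEventComm::PushSupplier::_nil());

    // Untyped events travel through the channel as CORBA::Any values.
    CORBA::Any event;
    event <<= CORBA::ULong(42);
    proxy->push(event);
}
```

The consumer side mirrors this structure: it calls for_consumers() on the channel, obtains a ProxyPushSupplier, and connects its own PushConsumer implementation with connect_push_consumer(), after which the channel delivers events by invoking the consumer's push() operation.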
