Linux Network Programming, Part 3
As mentioned, RPCs follow the traditional functional model. State information may need to be maintained independently at both client and server (depending on the type of application), and this state data is often re-transferred across the network on each remote function call.
An alternative architecture is to use techniques from object-oriented development and to partition the system into a set of independent objects. Each object is responsible for maintaining its own internal state information.
By using an object-oriented approach to your network software development, you can promote certain beneficial traits in your code:
Encapsulation: ensuring a clear separation between the interfaces (through which the objects in your system interact with one another) and their implementations
Modularity, scalability and extensibility
Re-usability (of code and, perhaps more importantly, of design)
Inheritance and specialization of functionality and polymorphism
The act of sending a message from one entity on a network to another is remarkably similar to one object invoking a method on another object. The integration of distributed network technology and object orientation unites a basic communications infrastructure with a high-level abstraction of these interfaces and a framework for encapsulation and modularity. Through this combination, developing applications that inter-work becomes significantly more intuitive.
In 1991, a group of interested parties joined to form the Object Management Group (OMG)—a consortium dedicated to the standardization of distributed object computing. The OMG supports heterogeneity in its architectures, providing the mechanisms for applications written in any language (running on any operating system, any hardware platform) to communicate and collaborate with each other—in essence, the development of a “software bus” to allow for implementation diversity, as a hardware bus does for expansion cards.
The OMG architecture which permits this distributed collaboration of objects is called the Object Management Architecture (OMA). Figure 2 shows the object management architecture.
CORBAservices provide the basic functionality for the management of objects during their lifetime—for example, this includes:
Naming (uniquely specifying a particular object instance)
Security (providing auditing, authentication, etc.)
Persistence (allowing object instances to be “flattened” to or created from a sequence of bytes)
Trading (providing objects and ORBs a mechanism to “advertise” particular functionality)
Events (allowing an object to dynamically register or unregister an interest in a particular type of event, essentially decoupling the communication from the object)
Life-cycle (allowing objects to be created, copied, moved and deleted)
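To make the Naming service concrete, here is a small language-neutral sketch (written in Python for brevity; the real service is defined by the OMG CosNaming interfaces, not this API) of the core idea: a name is resolved to an object reference, so clients never hard-code object locations. The names and reference strings below are invented for illustration.

```python
# Toy analogy for the CORBA Naming service (not the real CosNaming API):
# bind a human-readable name to an object reference, then resolve it.
class NamingContext:
    def __init__(self):
        self._bindings = {}

    def bind(self, name: str, obj_ref: str) -> None:
        # Associate a name with an object reference.
        self._bindings[name] = obj_ref

    def resolve(self, name: str) -> str:
        # Look up the reference previously bound under this name.
        return self._bindings[name]

ns = NamingContext()
ns.bind("BankAccount/alice", "objref:host-a:1234")   # hypothetical reference
print(ns.resolve("BankAccount/alice"))
```

A client that holds only the name "BankAccount/alice" can obtain a usable reference at runtime, which is the essence of uniquely specifying a particular object instance.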
Common Facilities provide the frameworks necessary for application development using distributed objects. These frameworks are classified into two distinct groups: horizontal facilities (commonly used in all applications, such as user-interface management, information management, task management and system management), and vertical facilities (related more to a particular industry, for example telecommunications or health care).
The CORBA standard specifies an entity called the Object Request Broker (ORB), which is the “glue” that binds objects together to enable higher-level distributed collaboration. It enables the exchange of CORBA requests between local and remote objects. Figure 3 shows the architecture of CORBA. Figure 4 shows the invocation of methods on different remote objects via the ORB.
In the OMA, objects provide services. Clients issue requests to objects for these services to be performed on their (the clients') behalf. The repetitive transmission of state information that is common with RPC applications is avoided, since each object is responsible for maintaining its own state. In addition, the objects interact through well-defined interfaces and are unaware of each other's implementation details. As such, it is much easier to replace or upgrade an object implementation, as long as the interface is maintained. The objects in an OMA/CORBA system may take on many different roles in relation to one another: peer-to-peer, client/server, publish/subscribe, etc.
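The contrast between the two models can be sketched in a few lines (this is a plain-language illustration, not CORBA code; the shopping-cart example is invented):

```python
# RPC-style: the function is stateless, so the caller must re-send
# the full state across the "network" on every call.
def rpc_add_item(cart_state, item):
    return cart_state + [item]

# Object-style: the object keeps its own state between invocations,
# so only the request itself needs to travel.
class CartObject:
    def __init__(self):
        self._items = []

    def add_item(self, item):
        self._items.append(item)

    def contents(self):
        return list(self._items)

state = []
state = rpc_add_item(state, "apple")   # state crosses the wire each time
cart = CartObject()
cart.add_item("apple")                  # state stays with the object
```

In the object model, upgrading `CartObject`'s internals (say, to a database-backed list) would be invisible to callers, since they interact only through its interface.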
Before a client can issue a request to invoke a method on an object, it must have a valid reference for that object. The ORB uses this reference to identify and locate the object—thus providing location transparency. As an application writer, you need not be concerned with how your application finds an object; the ORB performs this function for you transparently. In a similar fashion to how RPCs use XDR, CORBA specifies the Common Data Representation (CDR) format to transfer data across the network.
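The idea behind a common wire format can be illustrated with Python's `struct` module (real CDR, defined by the CORBA specification, adds alignment rules and a byte-order flag; the field layout below is invented for illustration):

```python
import struct

# Sketch of a shared wire format: both ends agree on the encoding,
# so a record can be rebuilt identically on any platform.
def marshal(seq_no: int, temperature: float, label: bytes) -> bytes:
    # Big-endian: 4-byte int, 8-byte double, 8-byte fixed-width string.
    return struct.pack(">id8s", seq_no, temperature, label)

def unmarshal(wire: bytes):
    seq_no, temperature, label = struct.unpack(">id8s", wire)
    return seq_no, temperature, label.rstrip(b"\0")   # strip padding

wire = marshal(7, 36.6, b"probe")
assert unmarshal(wire) == (7, 36.6, b"probe")
```

Because the byte layout is fixed by agreement rather than by either machine's native conventions, a big-endian server and a little-endian client decode the same bytes to the same values.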
An object reference does not describe the interface of an object. Before an application can make use of an object (reference), it must somehow determine what services the object provides.
Interfaces to objects are defined via the Interface Definition Language (IDL). The OMG IDL defines the interface of an object by means of the methods it supports and the parameters those methods accept. Various language mappings exist for the IDL (for example, C, C++, Java and COBOL). The generated language stubs provide the application with compile-time knowledge which allows these interfaces to be accessed.
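The following sketch shows roughly what a stub does, using a hypothetical `Account` interface. The IDL fragment, the `orb_invoke` function and both classes are invented for illustration; a real IDL compiler generates this plumbing, and a real ORB does the transport.

```python
# Suppose a hypothetical IDL interface (illustration only):
#
#   interface Account {
#       void   deposit(in double amount);
#       double balance();
#   };
#
# A generated client stub marshals the call and hands it to the ORB;
# here a stand-in "orb_invoke" fakes the transport with a local call.
def orb_invoke(object_ref, operation, *args):
    return getattr(object_ref, operation)(*args)

class AccountStub:                      # client-side generated stub
    def __init__(self, object_ref):
        self._ref = object_ref

    def deposit(self, amount):
        return orb_invoke(self._ref, "deposit", amount)

    def balance(self):
        return orb_invoke(self._ref, "balance")

class AccountImpl:                      # server-side implementation
    def __init__(self):
        self._balance = 0.0

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

acct = AccountStub(AccountImpl())
acct.deposit(10.0)
assert acct.balance() == 10.0
```

The client code calls `acct.deposit(...)` exactly as it would a local method; the stub hides all marshalling and transport, which is what "compile-time knowledge" of the interface buys you.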
The interfaces, alternatively, can be added to a special database, called the interface repository. The interface repository contains a dynamic copy of the interface information of an object, which is generated statically via the IDL. The Dynamic Invocation Interface (DII) is the facility by which an object client can probe an object for the methods it supports and, upon discovering a particular method, can invoke it at runtime. This involves looking up the object interface, generating the method parameters, invoking the method on the remote object and returning the results.
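The spirit of the DII can be conveyed with Python's runtime reflection (an analogy only; the real DII works through the Interface Repository and CORBA request objects, and the `Calculator` class here is invented):

```python
# Analogy for the Dynamic Invocation Interface: the client discovers
# an object's operations at runtime instead of relying on compiled-in
# stubs, then builds and issues a request dynamically.
class Calculator:
    def add(self, a, b):
        return a + b

    def mul(self, a, b):
        return a * b

obj = Calculator()

# "Probe" the object for its supported operations...
ops = [m for m in dir(obj) if not m.startswith("_")]

# ...then construct and invoke a request discovered at runtime.
method_name, args = "add", (2, 3)
result = None
if method_name in ops:
    result = getattr(obj, method_name)(*args)
```

Nothing about `Calculator` was known when this client was written; the method name arrived as data at runtime, which is precisely the flexibility (and overhead) the DII trades against static stubs.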
On the “server” side, the Dynamic Skeleton Interface (DSI) allows the ORB to invoke object implementations that do not have static (i.e., compile-time) knowledge of the type of object they are implementing. All requests to such an object are handled by having the ORB invoke the same single upcall routine, called the Dynamic Implementation Routine (DIR). The Implementation Repository (as opposed to the Interface Repository) is a runtime database of information about the classes the ORB knows of, their instantiated objects and additional implementation information (logging, security auditing, etc.).
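The server-side mirror image of the DII can be sketched the same way (again an analogy, not the CORBA DSI API; the request dictionary format is invented): every request, whatever its target operation, funnels through one dispatch routine that interprets it at runtime.

```python
# Analogy for the Dynamic Skeleton Interface: a single "upcall"
# routine (the counterpart of the DIR) receives every request and
# decides at runtime how to service it.
class DynamicServant:
    def __init__(self):
        self._state = {}

    def dispatch(self, request):        # the single upcall routine
        op, args = request["operation"], request["args"]
        if op == "set":
            key, value = args
            self._state[key] = value
            return None
        if op == "get":
            return self._state.get(args[0])
        raise ValueError(f"unknown operation: {op}")

servant = DynamicServant()
servant.dispatch({"operation": "set", "args": ("answer", 42)})
print(servant.dispatch({"operation": "get", "args": ("answer",)}))
```

Because no per-interface skeleton is compiled in, the same servant shape can front any interface, which is what makes the DSI useful for bridges and gateways.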
The Object Adapter sits above the core ORB network functionality. It acts as a mediator between the ORB and the object, accepting method requests on the object's behalf. It helps alleviate “bloated” objects or ORBs.
The Object Adapter enables the instantiation of new objects, requests passing between the ORB and an object, the assignment of object references to an object (uniquely naming the object), and the registering of classes of objects with the Implementation Repository.
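The responsibilities listed above can be sketched as follows (a simplified illustration, not the BOA API; a real object adapter also handles activation policies, and the reference format here is invented):

```python
import itertools

# Sketch of an object adapter's mediation role: it hands out unique
# object references at registration time and routes incoming requests
# to the right registered object.
class ObjectAdapter:
    def __init__(self):
        self._objects = {}
        self._next_id = itertools.count(1)

    def register(self, obj) -> str:
        ref = f"objref:{next(self._next_id)}"   # assign a unique reference
        self._objects[ref] = obj
        return ref

    def invoke(self, ref, method, *args):
        # Accept the request on the object's behalf and forward it.
        return getattr(self._objects[ref], method)(*args)

class Greeter:
    def hello(self, name):
        return f"hello, {name}"

adapter = ObjectAdapter()
ref = adapter.register(Greeter())
print(adapter.invoke(ref, "hello", "world"))
```

Keeping this bookkeeping in the adapter, rather than in each object or in the ORB core, is exactly how "bloat" in both is avoided.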
Currently, all ORB implementations must support one object adapter, the Basic Object Adapter (BOA).
All of this talk of interoperability is of little use unless ORBs from different developers/vendors can communicate with one another. The General Inter-ORB Protocol (GIOP) addresses this by specifying a standard transfer syntax and a set of message formats for communication between ORBs. The GIOP is independent of any particular network transport.
The Internet Inter-ORB Protocol (IIOP) specifies a mapping between GIOP and TCP/IP. That is, it details how GIOP information is exchanged over TCP/IP connections. In this way, it enables “out-of-the-box” interoperability with IIOP-compatible ORBs based on the world's most popular, vendor-neutral network transport—TCP/IP.
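The essence of IIOP is framing GIOP messages over a byte stream. The sketch below mimics that framing; the field layout follows the 12-byte GIOP 1.0 header (magic, version, byte-order flag, message type, body length) as I understand it, but it is included for illustration only, not as a wire-compatible implementation.

```python
import struct

# Sketch of GIOP-style message framing as carried over TCP by IIOP.
def frame(message_type: int, body: bytes) -> bytes:
    header = struct.pack(">4sBBBBI",
                         b"GIOP",       # magic
                         1, 0,          # protocol version 1.0
                         0,             # byte order: 0 = big-endian
                         message_type,  # e.g., Request, Reply
                         len(body))     # body length in bytes
    return header + body

def parse(packet: bytes):
    magic, major, minor, order, mtype, size = struct.unpack(
        ">4sBBBBI", packet[:12])
    assert magic == b"GIOP"
    return mtype, packet[12:12 + size]

pkt = frame(0, b"request-body")
assert parse(pkt) == (0, b"request-body")
```

Because every message is self-describing (type and length in the header), either end can read exactly one message off the TCP stream before handing the body to the ORB for unmarshalling.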