Help with Designing or Debugging CORBA Applications
This article explores some useful extensions I have added to an open-source protocol analyzer to allow the extraction of OMG IDL (interface definition language)-defined data types from TCP/IP traffic (using GIOP/IIOP). I also discuss the development and use of a helpful tool (idl2eth) that takes your own OMG IDL file(s) and generates protocol analyzer plugins, and I lead you through the steps of creating a plugin for the CORBA project you are working on.
So, if you are designing or debugging a CORBA application that is using GIOP/IIOP, you can take the CORBA IDL file(s) and generate plugins that will decode your data and show you what it looks like "on the wire".
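As a concrete starting point, an IDL file fed to idl2eth could look like the following. (This Echo interface is purely illustrative; any IDL file of your own would do.)

```idl
// hypothetical example -- substitute your project's own IDL
module demo {
    interface Echo {
        string echoString(in string message);
    };
};
```

From a definition like this, the generated plugin knows how to decode the request and reply payloads for each operation, rather than stopping at the GIOP header.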
This would be most useful for debugging, say, traffic between a client and server or for learning how OMG IDL data types are marshaled via CDR (common data representation) Transfer Syntax onto an octet stream. It also would aid ORB interoperability testing, especially considering the number of new ORB implementations that are appearing lately.
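To make the CDR rules concrete, here is a sketch in Python (purely for illustration; the struct layout is a made-up example, and this is not how Ethereal or an ORB is implemented). CDR aligns each primitive type on its natural boundary, counted from the start of the stream, so a struct containing a short followed by a long picks up two padding octets between the fields:

```python
import struct

def cdr_marshal_short_long(s, l, little_endian=True):
    """Marshal a hypothetical struct { short s; long l; } per CDR rules.

    Each primitive is aligned on its natural boundary relative to the
    start of the stream: a short on 2 octets, a long on 4 octets.
    """
    fmt = '<' if little_endian else '>'
    buf = struct.pack(fmt + 'h', s)    # offsets 0-1: the 2-byte short
    buf += b'\x00\x00'                 # offsets 2-3: padding, so that
    buf += struct.pack(fmt + 'i', l)   # the 4-byte long starts at offset 4
    return buf

octets = cdr_marshal_short_long(7, 1000)
# 2 (short) + 2 (padding) + 4 (long) = 8 octets on the wire
```

Seeing exactly where such padding octets land in a capture is the kind of detail the generated dissectors make visible.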
For brevity, I'll refer to OMG IDL as IDL in this article. It is important, however, to remember that there are other forms of IDL besides OMG IDL in use daily. They will not be discussed here.
With the proliferation of so many IP-based protocols in today's networks, it is hardly surprising that the need arises for good protocol analyzers. They generally have the ability to monitor many different types of traffic and to sort or filter them according to the users' wishes.
The resulting data normally is captured (optionally passing through one or more capture filters) and also displayed (optionally passing through one or more display filters), so that one can sort out the interesting packets from the rest.
A wonderful example of an open-source protocol analyzer is Ethereal (www.ethereal.com). It is a highly capable analyzer that compiles and runs (thanks to autoconf) on many flavors of UNIX (including Linux), Windows, Mac OS X, etc.
It can capture packets from a number of different types of networking devices and also can read capture files taken earlier using either Ethereal or other programs such as tcpdump, snoop and various other network analyzer programs. The following is a list of capture file formats it can read (Ethereal's native format is the libpcap format also used by tcpdump, so captures taken with Ethereal can be read by other programs that read tcpdump captures, such as tcpdump itself): libpcap/tcpdump, snoop, Shomiti, LANalyzer, MS Network Monitor, AIX iptrace, NetXray, Sniffer, Sniffer Pro, RADCOM, Lucent/Ascend debug output, Toshiba ISDN router snoop output, HPUX nettl, ISDN4BSD i4btrace utility, Cisco Secure IDS and pppd log files (pppdump format).
Ethereal also understands many different protocols, from AARP to ZEBRA, and a lot of others in between. I would estimate that it understands in excess of 200 protocols according to the latest Ethereal CVS tree.
Some of the more common protocols it can decode are DNS (domain name service), NFS (network file system), Telnet, SMB (server message block protocol), HTTP (hypertext transfer protocol) and POP (post office protocol).
It also has the ability to follow TCP streams and has useful filtering capabilities. (For the impatient, you can jump ahead and look at Figure 4 to see Ethereal in action.) And it has the ability to incorporate new protocols as either plugin dissectors or built-in dissectors.
Built-in dissectors are compiled as part of the Ethereal binary, whereas a plugin dissector is a shared library loaded at runtime. This helps keep the Ethereal binary down to a reasonable size and also allows protocol dissection code to be distributed as a shared library if you wish. It also means that, because it's a shared library, modifying the plugin and running make again does not recompile the Ethereal binary, which saves a lot of time.
Like most things in life, it is when we are faced with a problem that we seek out solutions. Our staff at Ericsson Inc., based in the Telecom Corridor in Richardson, Texas, wanted to debug some CORBA traffic on the wire to support our R&D efforts involving CORBA applications.
I was already familiar with Ethereal and saw that some initial code to decode the GIOP header information was present, but that it had progressed no further. This is partly because, once the header information is decoded, the payload depends entirely on which CORBA operation is being invoked at that moment. The operations that, for example, a CORBA server (or servant) provides are defined using OMG IDL.
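To show what that header decoding involves, here is a sketch in Python (not Ethereal's actual code) of parsing the fixed 12-octet GIOP message header: the magic "GIOP", the protocol version, a flags octet whose low bit gives the byte order, the message type and the message size. Everything after these 12 octets is the operation-dependent payload described above.

```python
import struct

def parse_giop_header(data):
    """Parse the fixed 12-octet GIOP message header (illustrative sketch)."""
    magic, major, minor, flags, msg_type = struct.unpack('4sBBBB', data[:8])
    if magic != b'GIOP':
        raise ValueError('not a GIOP message')
    little_endian = bool(flags & 0x01)   # low bit of flags = byte order
    (size,) = struct.unpack('<I' if little_endian else '>I', data[8:12])
    return {'version': (major, minor),
            'little_endian': little_endian,
            'msg_type': msg_type,        # e.g., 0 = Request, 1 = Reply
            'size': size}

# a little-endian GIOP 1.0 Request header claiming a 24-octet body
hdr = b'GIOP' + bytes([1, 0, 1, 0]) + struct.pack('<I', 24)
info = parse_giop_header(hdr)
```

Note that the message size itself is encoded in the byte order announced by the flags octet, which is why the dissector must read the flags before it can interpret the rest of the message.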