The Politics of Porting

The flagship application was headed for the rocks. One man seized the wheel and guided the ship to safety—but will he be keelhauled for acting without orders?
I “Steal” the Code

At this point I should say “Don't do what I did.” It is illegal to copy your employer's source code without permission, and you are likely to find yourself at the wrong end of a lawsuit if it all goes belly up, but sometimes dire circumstances require drastic solutions.

In late November 1998, I checked out a full set of source code and, along with a newly downloaded copy of Oracle 8.0.4, I set up the development environment on my PC at home; using the unedited Makefile from the Solaris development directory, I ran make. Not surprisingly, it all fell in a heap, throwing up pages of compile errors.

I devoted an hour or two most evenings to stepping through the errors and debugging them. Those I didn't understand I presented to Richard Glover, one of the senior UNIX developers at Constellar. He also was the release manager for the various UNIX versions of the Hub, so he was a good person to know.

Most of the problems revolved around the Oracle Pro*C precompiler, which didn't recognise some Linux-specific directives, such as the #include_next statements in the Linux header files. This was resolved by copying the relevant files, stripping out the #include_next statements and placing the location of our customised versions of these files first in the include path.
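The workaround can be sketched in a few lines of shell. The header and the paths here are illustrative stand-ins, not the actual files we patched, and the include= flags are only shown, not passed to a real precompiler run:

```shell
# Stand-in for a glibc header using the #include_next directive that
# the precompiler could not parse (the real offenders were system
# headers such as limits.h):
mkdir -p fixed-includes
cat > sample-limits.h <<'EOF'
#ifndef _LIBC_LIMITS_H_
#define _LIBC_LIMITS_H_
#include_next <limits.h>
#endif
EOF

# Copy the header and strip out the offending directive:
cp sample-limits.h fixed-includes/limits.h
sed -i '/#include_next/d' fixed-includes/limits.h

# Put the customised directory first on the include path, so the
# cleaned copies are found before the system versions:
PROC_FLAGS="include=$PWD/fixed-includes include=/usr/include"
echo "$PROC_FLAGS"
```

Because the customised directory is searched first, the precompiler never sees the directives it cannot handle, while the compiler proper still uses the unmodified system headers.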

There also were assorted issues with library paths. These were trivial and related solely to the setup on my home PC; they were solved by adding entries to the LD_LIBRARY_PATH shell variable, exactly as they would be on Solaris. Some macro definitions—ULONG_MAX, INT_MIN, INT_MAX, LONG_MAX—were missing from the copy of Red Hat I was using at the time, so I copied them from the relevant Solaris headers as a temporary fix.
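Both fixes amounted to a few lines. The Oracle install path below is an assumption about a typical 8.0.4 layout, and the macro values are the standard 32-bit ones—a guarded stop-gap of roughly this shape, not the Hub's actual header:

```shell
# Library paths: point the runtime linker at the Oracle client
# libraries, just as on Solaris ($ORACLE_HOME is illustrative).
export ORACLE_HOME=/u01/app/oracle/product/8.0.4
export LD_LIBRARY_PATH="$ORACLE_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Missing macros: a compatibility header carrying the standard
# 32-bit limits, guarded so it is harmless wherever the system
# headers already define them.
mkdir -p compat
cat > compat/limits_fix.h <<'EOF'
#ifndef INT_MAX
#define INT_MAX   2147483647
#endif
#ifndef INT_MIN
#define INT_MIN   (-INT_MAX - 1)
#endif
#ifndef LONG_MAX
#define LONG_MAX  2147483647L
#endif
#ifndef ULONG_MAX
#define ULONG_MAX 4294967295UL
#endif
EOF
```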

I left some minor problems unresolved because my resources were somewhat limited. For instance, the Hub was required to connect to many and varied sources of data, and some of these required the IBM MQSeries libraries, which I did not have available at the time. In any case, I did not have the facilities to test such functionality at home. I edited the main Makefile to disable the linking of these libraries and accepted that my version of the Hub would be a lightweight, more dynamic kind of Hub. If it compiled at all, I would be happy.
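The Makefile edit can be sketched as a conditional of this sort. The variable names and the adapter object file are hypothetical, not taken from the Hub's actual build:

```make
# Hypothetical sketch: leave the MQSeries adapter out of the build
# when the libraries are not installed locally.
WITH_MQ ?= no

ifeq ($(WITH_MQ),yes)
MQ_LIBS = -lmqm
MQ_OBJS = mq_adapter.o
endif

hub: $(OBJS) $(MQ_OBJS)
	$(CC) -o $@ $(OBJS) $(MQ_OBJS) $(LDFLAGS) $(MQ_LIBS)
```

Gating both the objects and the link flags behind one switch means the default build succeeds on a machine without the MQSeries client installed, while the full build remains a one-variable change.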

Some differences in the behaviour of various Linux utilities were sorted out by editing the build scripts to reflect the proper invocation on each platform. I worked my way around some of them, and I ignored others to save time. Richard would later track down the source of all these problems and make allowances for them in the platform-specific sections of various build scripts in the official build environment at work. Among the changes to the standard build scripts were various compiler and linker flags: we were obliged to use the Sun compiler for the official Solaris build, so moving to Linux and GCC required a bit of translation with regard to the flags and switches. Invocations of df, the shell built-in echo command, ftp, ldd, mknod, nm, ps and lex/yacc (used for the proprietary Transformation Definition Language) all required changes.

As Richard pointed out a few weeks later, the set -o posix option in bash resolved nearly all the shell script differences, and in any case, the command-line differences were quite trivial and the solutions obvious. In some cases, it was simply a matter of providing the Berkeley versions of the utilities rather than the System V ones, or vice versa. The situation would be different today, because many of the GNU utilities that are standard on Linux now ship with Solaris as well, so porting between the two has become even simpler.
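The effect of Richard's suggestion is easy to demonstrate: POSIX mode makes bash track the standard wherever its default behaviour differs, which is what lets scripts written for Solaris's /bin/sh run largely unchanged. A minimal sketch:

```shell
#!/bin/bash
# Enable POSIX mode; scripts written for Solaris /bin/sh then behave
# far more predictably under Linux's bash.
set -o posix

# Confirm the mode is on, using shopt -o to query set -o options:
if shopt -oq posix; then
    posix_state=on
else
    posix_state=off
fi
echo "posix mode: $posix_state"
```

The same effect can be had by invoking the shell as sh or starting it with bash --posix, so the official build scripts did not need per-platform rewrites.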

In all, it probably took about two man-days to get a rough-and-ready port with a binary that at least looked like an executable.

The Smoke Test

I had a binary; now I needed to test it. I checked out a copy of the suite of test scripts and data used in the daily smoke test at work. After setting the environment variables for the smoke test, I set the test harness running.

The usual smoke test consisted of more than 200 transactions, and on the other platforms would take anything from six hours for UNIX versions to more than 24 hours for the Windows NT version—this is not good when testing a daily build. I ran the test on my home-built Linux port, and within a minute or two I had errors scrolling up the screen. Disaster. I checked the error logs, but they were no help. For some reason, the logs didn't provide any diagnostics for the first few dozen transactions, and this confused me.

It took a while to figure out that the reason there were no diagnostics in the error log for the initial set of transactions was simply because the Linux Hub already had run the first few dozen transactions successfully! This initially was hard to believe, because the same set of transactions took considerably longer to run on most of the other platforms to which the Hub was ported. This meant that my rough homebrew Linux version of the Hub was, without tuning, already faster on my home PC than some of the other UNIX ports, and several times faster than the Windows NT port. Even allowing for the differences between the hardware specifications on which these varied ports ran, it looked good.

I felt sure that if these results could be reproduced in the workplace, there would have to be an official Linux port.
