The Politics of Porting
At each of the companies where I have worked in the past eight years, I have introduced Linux in some capacity. It often seemed the obvious choice, but what was obvious to me and other colleagues was not obvious to everyone. Some people needed persuading before they would accept what to them was an unconventional solution. In most cases, benchmarks and cost factors were enough to tip the balance. Novelty and openness to new ideas were sufficient for others. But, for a stubborn minority, nothing could persuade them. Although they participated in the debate, it appeared that for them discussion was simply a tactic to avoid implementation.
Faced with such opposition, arguments are pointless and demonstrations are a waste of time. In those situations, other tactics are called for. Here I describe how I persuaded my employer to port the company's flagship application to Linux literally before the opposition in management knew what was happening.
In 1998, I was the UNIX and Oracle administrator for a company called Constellar Ltd. It specialised in data migration and Enterprise Application Integration (EAI) solutions for Global 2000 companies. What does that mean? At its simplest, it might mean transforming the data in a legacy mainframe database for insertion into a newly prepared database. Such tasks are by their nature repetitive and time-consuming. They are ideal candidates for automation, and that is where the Constellar Hub comes in.
Add a few extra offices in different parts of the world, more databases for customer data, warehouse inventories and order tracking and imagine how these assorted data stores and applications could be made to work together seamlessly. Throw in a live environment and a company that relies on accurate available data, and you begin to understand how complex the task could become.
The main application was a client-server product called the Constellar Hub, which was designed to extract data from disparate sources, processing, weeding and integrating it on the fly before writing or streaming the transformed data to its destination.
The server relied on an Oracle database to store data in transit, as well as the metadata in which the business rules were held, data types were defined and the relationships between the multifarious data sources and destinations were stored.
The combined transient data and metadata were held in upward of 200 database tables with many triggers and constraints to enforce the integrity of the data as it flowed through the bowels of the application. The engine of the application consisted primarily of C and SQL code, but an API provided the opportunity to extend functionality if required.
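To make the integrity-enforcement idea concrete, here is a minimal, hypothetical sketch of how a trigger on a metadata-style table can reject inconsistent transient data. The table and column names are invented for illustration and are not Constellar's actual schema; sqlite is used purely so the example is self-contained, where the real Hub used Oracle.

```python
import sqlite3

# Hypothetical schema: a "rule" table holding business-rule metadata and a
# "transit_row" table holding data flowing through the engine. A trigger
# enforces that every transit row references a defined business rule.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE rule (
    rule_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL
);
CREATE TABLE transit_row (
    row_id  INTEGER PRIMARY KEY,
    rule_id INTEGER NOT NULL,
    payload TEXT
);
-- Abort any insert that references a business rule that does not exist
CREATE TRIGGER transit_rule_exists
BEFORE INSERT ON transit_row
WHEN (SELECT COUNT(*) FROM rule WHERE rule_id = NEW.rule_id) = 0
BEGIN
    SELECT RAISE(ABORT, 'unknown business rule');
END;
""")

cur.execute("INSERT INTO rule (rule_id, name) VALUES (1, 'uppercase-names')")
cur.execute("INSERT INTO transit_row (rule_id, payload) VALUES (1, 'acme')")

try:
    cur.execute("INSERT INTO transit_row (rule_id, payload) VALUES (99, 'bad')")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True  # the trigger rejected the orphaned row
```

Multiply this pattern across 200-odd tables and it is easy to see both why the integrity guarantees were valuable and why the Oracle dependency made porting look daunting.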
On a large project, two or more Hub servers might be required, sometimes located at remote sites around the world, feeding or being fed data through frame relay or, less commonly, the Internet. A typical project would process many gigabytes of data, with hundreds, thousands or, more rarely, tens of thousands of business rules defined.
In many cases, the whole project of which the Hub was only a part—albeit a central part—would cost in the millions and involve dozens of technical and business staff.
Prospective customers would be given a relatively simple demonstration of the tool running on test data, typically on Sun hardware. The server had been ported to various other shades of UNIX, including Dynix, Data General UNIX, AIX and Digital UNIX.
It was also around that time that Windows NT was being promoted as an enterprise-ready OS destined to take over in many areas previously thought to be the exclusive domain of UNIX. Indeed, our vice president of engineering was convinced that the future was Microsoft and told us so. In the future, he said, most if not all our sales would be generated from the Windows NT version of the Hub. As a first move, the company was already working on a Windows NT port and had announced the beta version in February 1998. Such a view was understandable, because the cost of a Windows NT box versus the various UNIX platforms tended to highlight only the expense of UNIX solutions. Linux was destined to change all that.
By that time, I already was using Linux on all my home PCs as well as my workplace workstation. I also followed Linux developments fairly closely, especially any that might affect the adoption of Linux in the enterprise workplace. Like many other Linux advocates, I saw the possibility of a computing future that might not be restricted to the products of a single company imposed on a captive customer base and wondered if the Constellar Hub could be ported to Linux.
There were two problems, however. First, the Hub made extensive use of Oracle databases and Oracle-specific features, and it would be a nightmare porting those dependencies to a Linux-friendly equivalent. Second, and this was a more difficult hurdle to clear, a few people in the company were not predisposed to see the opportunities presented by a Linux port.
I had vaguely suggested to one or two members of middle management that perhaps there ought to be a Linux port. The response I received was along these lines: Linux is an amazing product for a bunch of amateurs to have produced, but it isn't quite ready for the big leagues, and the company can't afford the time and expense of porting to it when there is serious business to deal with. I have heard that argument many times since then and have learned it is largely an excuse for not thinking.
Two things changed the nature of the ball game. First, in September 1998, Larry Ellison announced, with excellent timing, that Oracle would release a Linux version of its database server and made it free for download for development use. This had a dual effect. It demonstrated that some very large companies were taking Linux seriously, and at the same time, it made the possibility of a Linux port of the Constellar Hub feasible.
I decided that if I couldn't persuade them to port the Hub I would do it myself, so I copied the Hub source code and set about providing a proof of concept. I figured this either would get me the sack or provide a fait accompli with which those in opposition could not argue.