Converting from SCO Xenix to Linux
There are many customized versions of SBT's accounting system programs because SBT (http://www.sbt.com/) has been supplying source code with the multi-user versions of its software for many years. The system I upgraded from SCO Xenix to Caldera OpenLinux used SBT's Accounts Receivable/Inventory program as the basis for a point-of-sale system. Both the original SBT programs and the add-ons were written in SCO FoxBASE+, one of the dBase-compatible languages. My client, Steve Maxwell, had been using the program for about ten years to run his department stores and was pleased with its performance and stability. Unfortunately, when I tested how well this software handled dates in the year 2000, it failed: SCO FoxBASE+ did not work correctly after 1999.
While I was looking into the available solutions to the problem, I found, in addition to an upgrade from SCO, a couple of dBase-compatible languages also available for Linux. I liked a product called FlagShip from Multisoft (sold in the U.S. by Linux Mall, http://www.linuxmall.com/), because my initial tests indicated it would compile the original SBT source code with only a few changes, which I'll detail later, and it produced very fast code. It was also the least expensive solution, at less than $600. FlagShip compiles the source code in a two-step process that converts the original dBase source into C code, which is then compiled by Linux's C compiler into a native executable.
From the client's point of view, one of the major advantages of the upgrade to Linux was that it allowed him to use much of his existing hardware, including the many terminals, receipt printers, bar code scanners and so on, as well as his main computers. The only hardware we had to replace was a 60MB tape drive and a multiport serial card; both were no longer manufactured and were not supported by Linux. We upgraded the multiport serial board to a Cyclades Cyclom Y board, and upgraded the tape drive to a Hewlett-Packard 5GB IDE unit with BRU backup software. The ability to recycle major components saved tens of thousands of dollars. By using the original programs, we also avoided training costs.
Since this was my first Xenix-to-Linux conversion, I was fairly concerned I would run into an unforeseen problem that would make the project fail. In this regard, I was fortunate to have a very experienced client. Steve Maxwell had been involved in the development of the original Xenix version of the programs, and had been maintaining the system without outside support. I felt much better having a truly smart client around, because I ran into trouble right away: I couldn't get the information on the Xenix system over to the Linux system. Although Linux can read Xenix file systems, I was not able to get Linux to read the Xenix hard drive that held the original source code and data. I finally got the information off the Xenix system using a program called Term that Steve provided, which has both Xenix and DOS versions. I wound up with all the needed files on the DOS partition of my Linux development system. This turned out to be quite useful: it let me use all the DOS development tools in addition to the Linux tools, and produced code I could debug in either environment.
I started the conversion by doing the Y2K work. I converted anything that had to do with dates from the original format of 8 bytes of character data (e.g., 01/01/90) to dBase's date type. This is the approach used by SBT for its DOS and Windows products. I used the SET CENTURY ON command to enable four-digit years, then made the necessary changes to convert the date where needed. This turned out to be relatively easy.
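The mechanics of that change can be sketched in a few lines of xBase. The field names, variable names and the pivot year of 50 below are my own illustrations, not the actual SBT source:

```xbase
* Hedged sketch of the Y2K date conversion; names are illustrative.
SET CENTURY ON              && display and accept four-digit years

* Old scheme: an 8-byte character field such as "01/01/90".
cOldDate := "01/01/90"

* New scheme: a true dBase date type, built with CTOD().
* A two-digit year is ambiguous, so the century is expanded
* explicitly first (pivot year 50 is an assumption here).
cFull := LEFT(cOldDate, 6) + ;
         IIF(VAL(RIGHT(cOldDate, 2)) < 50, "20", "19") + ;
         RIGHT(cOldDate, 2)
dNew  := CTOD(cFull)

? YEAR(dNew)                && 1990; date arithmetic is now safe past 1999
```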
In the next step of the conversion, I built a stripped-down copy of the department store's system, consisting of a main computer and a single point-of-sale workstation (a dumb terminal, bar code scanner and receipt printer). I also set up some system printers in the configuration used by the store.
None of the available Linux terminal types worked, and when I logged out from the terminal, I didn't get a login prompt again. This was something I hadn't expected and was a pain to resolve. I eventually produced a custom terminfo entry by reverse engineering the Xenix termcap settings. The login problem was solved by using mingetty instead of the standard getty program. After resolving these problems I had a working single-terminal version of the system.
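A hedged sketch of the terminfo side of that fix follows. The capability strings and the entry name are placeholders of my own, not the values actually reverse-engineered from the Xenix termcap; the point is the mechanics of compiling a private entry with tic:

```shell
#!/bin/sh
# Compile a custom terminfo entry into a private database.
# The capabilities below are illustrative placeholders.
mkdir -p "$HOME/.terminfo"
cat > /tmp/xenix-dumb.ti <<'EOF'
xenix-dumb|hypothetical entry for a Xenix-era dumb terminal,
	am, xon, cols#80, lines#24,
	clear=\E[H\E[2J, cup=\E[%i%p1%d;%p2%dH,
	el=\E[K, home=\E[H,
EOF
tic -o "$HOME/.terminfo" /tmp/xenix-dumb.ti

# Verify the entry compiled and is visible to curses applications:
TERMINFO="$HOME/.terminfo" infocmp xenix-dumb
```

The login-prompt half of the fix was simply a matter of pointing the relevant /etc/inittab entries at mingetty instead of the standard getty.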
The way SCO FoxBASE+ handles multi-user files is different from the way FlagShip handles them. Basically, I stripped out the FoxBASE+ code and added the FlagShip code as needed, an easy but somewhat tedious process, since I had to add a block of FlagShip code at each point in the program where the USE command appeared. In about a month, I had finished all conversions and installed the Linux hard drive (which I'd done the development on) as the main store computer. I kept the Xenix hard drive handy, in case there was more trouble in the upgrade. Amazingly enough, the Linux version worked on the first try. After that, I made minor changes to the code to optimize the way the printers worked, but the Linux system was up and running and I never needed to go back to Xenix.
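The per-USE change can be sketched like this. The file and alias names are illustrative, and the retry policy is my own example rather than the actual SBT or FlagShip code:

```xbase
* Hypothetical FlagShip-style shared open with a retry loop.
nTries := 0
DO WHILE nTries < 10
   USE arcust SHARED ALIAS cust    && open the table for multi-user access
   IF ! NETERR()                   && NETERR() is .F. when the open succeeded
      EXIT
   ENDIF
   nTries := nTries + 1
   INKEY(1)                        && wait a second before retrying
ENDDO
IF NETERR()
   ? "Could not open customer file -- try again later"
   QUIT
ENDIF

* Individual updates then lock a single record before writing:
IF RLOCK()
   REPLACE cust->balance WITH cust->balance + nAmount
   UNLOCK
ENDIF
```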
The original point-of-sale programs supported multiple stores by updating a set of master files through a complicated operation involving transaction files created at each store and a batch update process. I was able to simplify this by using Linux's powerful communication features to connect the branch store directly to the main store via the Internet.
I obtained two dedicated IP addresses from Gilanet, the local ISP that serves the two towns where the stores are located, and configured the automatic dialer program to bring up the links during store hours and shut them down when not needed. Bill Stites, the system administrator from Gilanet, provided great support in getting the Internet connections working. Bill uses Caldera OpenLinux internally for some of his Gilanet servers.
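One way to express a store-hours schedule is a pair of cron entries. The script paths and the opening hours below are assumptions for illustration, not the stores' actual configuration:

```shell
# /etc/crontab fragment -- bring the dial-up link to the branch store
# up before opening and down after closing (paths and hours hypothetical)
55 8  * * 1-6 root /usr/local/sbin/link-up
15 18 * * 1-6 root /usr/local/sbin/link-down
```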
When we started this project, Steve Maxwell and I had wanted to complete the upgrade prior to the busy Christmas season. As it turned out, even with the problems, I was able to complete the work in about half the expected time. The system runs beautifully.
Fred Treasure can be reached via e-mail at email@example.com.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
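That example can be written directly at the shell. The /home path and the "ERROR" pattern are just placeholders:

```shell
#!/bin/sh
# Find every .log file under /home and list the ones containing "ERROR".
# find handles the file discovery; grep -l does the searching and prints
# only the names of the files that match.
find /home -type f -name '*.log' -exec grep -l 'ERROR' {} + 2>/dev/null
```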
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide