Linux Means Business: Linux and Banking
We are a small bank in a small country. Until two years ago, our management's software policy was internal development: standard banking software costs about $1 million, and we were small enough to develop everything in-house. Our servers were Intel-based machines running SCO and Informix SE.
Then management decided to change our business strategy, and we soon started to grow very fast. In less than a year, the number of employees doubled, and so did the number of PCs. Our LAN became a WAN, and the number of transactions increased fivefold and continues to grow. We could not deliver efficient software solutions for every new request, so the decision was made to buy commercial applications. We also bought two IBM RS/6000 servers running AIX, connected in a high-availability cluster.
A year later, this decision may not seem too smart. We have our own software, which handles 70% of our business, plus a bunch of commercial applications that communicate with each other through shell scripts, 4GL and C programs performing the needed data conversions. But development takes time, and we cannot deploy a new application without at least three months of extensive testing.
More and more users needed Internet access, and we needed a dial-on-demand router. We didn't want specialized hardware. We wanted to be able to customise logs, schedule access time on a per-user basis and use an existing PC for that purpose.
After a week of running NT Workstation with a WinGate demo, we gave up. WinGate died two or three times a day. That would not have been a problem in itself, since it was only a demo, but the COM port stayed locked every time it happened and remained locked until reboot. Obviously, NT did not release allocated resources after process termination. It did not look like a serious operating system to us.
Linux came into our focus accidentally. A friend of mine did quite a bit of advocacy and convinced me to try it, giving me precise directions and a Slackware 3.2 CD. Why not? If it works, fine; if it doesn't, it costs me nothing. After all, I am an experienced UNIX administrator, so I can do it. I took a 486 with a 1GB SCSI disk and 16MB of RAM and started to play with Linux.
A few hours after inserting the CD into the drive, I had a working proxy, and I still have it. It has been working for a year now. All we have to do with it are the usual administration jobs: adding new users, kernel patches, etc. Furthermore, it became a caching proxy and news server. To be quite specific, there is diald, socks5 proxy, caching DNS, tcpd, Apache as caching HTTP proxy and internal web server, and inn as a news server for both local and Usenet newsgroups.
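For illustration, the Apache part of such a setup can be sketched with a few httpd.conf directives (Apache 1.3 mod_proxy caching; the paths and sizes here are assumptions, not our actual configuration):

```
# Hypothetical httpd.conf excerpt: Apache 1.3 acting as a caching HTTP proxy.
ProxyRequests On                 # accept proxy requests from clients
CacheRoot /var/cache/httpd       # where cached objects are stored
CacheSize 102400                 # disk cache size in KB (here, 100MB)
CacheMaxExpire 24                # maximum age of a cached document, in hours
```

Clients then simply point their browsers at this machine as their HTTP proxy.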
My colleagues were very suspicious of Linux, but started respecting it after a few months of uptime.
After the initial experience, I was very pleased with Linux and started to explore. When I discovered the purpose of the iBCS module, I had to try running the SCO version of Informix on it. So I copied Informix, our database and our applications, tested them, and again everything worked on the first try. I even found Linux a better development platform than SCO: it didn't require additional kernel parameter tuning to run complex database queries, it ran faster, it was more stable, and it had better development tools.
Granted, it's not quite fair to compare Linux to an obsolete product (SCO 4.2), but I tested concurrent queries and updates on millions of records, shut down the machine in the middle of work, killed system processes such as kswapd, tested power-off behaviour and even pulled the IDE disk cable off the running machine. Although I got database corruption, I never managed to produce unrecoverable file system errors; when fsck failed, fsdebug worked. For the record, we did have unrecoverable file system errors on SCO. The result is that we now use Linux as our development platform.
Switching to Linux may not be easy for a Windows user, but we are used to working in a UNIX environment. We have set up Linux on a PC as a server for development and testing purposes and on three developers' PCs instead of Windows. We used Windows mostly as a task switcher for a few terminal sessions and for printer sharing. The only reason we used it was that it came pre-installed on every PC. Linux gave us additional possibilities: we have our native development environment on every PC, with good development tools such as Emacs, ddd and other good stuff. Since we are used to GNU tools, we installed them on both AIX and SCO. From a user's point of view, our AIX looks more like Linux now.
It's hard to believe that you can get a good development environment free when you used to pay thousands of dollars for one, but it's true.
When our users wanted to share Windows files across the WAN, we didn't even think about an NT server, as Linux had already proven its reliability. So, we installed Samba as a WINS server. We did not set it up on the AIX servers because they handle our critical applications, and we just don't want to experiment on them. Today we use Linux/Samba for printer sharing, too, as it gives our UNIX systems access to Windows print servers, and Windows workstations access to UNIX printers. Basically, when a user wants to print something from UNIX, he can either print to a dot-matrix or laser printer in the system room, or to a printer in his room, which may be connected to his PC or Windows-shared. When he chooses a local printer, the appropriate Windows print queue is determined by his IP address.
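That IP-to-queue mapping is simple enough to sketch as a shell helper for a print filter (the queue names and subnets below are invented for illustration; they are not our production values):

```shell
#!/bin/sh
# Pick a Windows print queue based on the client's IP address.
# Subnets and queue names are hypothetical examples.
queue_for_ip() {
    case "$1" in
        10.1.*)  echo "hp_accounting" ;;   # accounting floor
        10.2.*)  echo "hp_frontdesk"  ;;   # front office
        *)       echo "lp_sysroom"    ;;   # fall back to the system room
    esac
}

queue_for_ip 10.1.0.42   # prints hp_accounting
```

A print filter can call a function like this to choose the Samba print command's destination queue.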
When we realized Samba's possibilities, we wanted to implement a centralized backup, based on the following idea: all users have their accounts on the Linux box, their Windows boxes mount their home directory upon startup, and all their documents are saved on Linux. Most of our users don't even know where their files reside; they just click on the icon and use the application, so we could change the working directory for Windows applications on every PC.
This policy also has security benefits. First, only the logged-in user can read his documents. Second, we receive some documents from the National Bank and other government organizations as Microsoft Office files, and once we even received a macro virus with them. It damaged only the user's local disk, but the user lost some important files. Until then we had not worried about viruses, since they come with cracked games, but now they may come with business data. With files stored centrally, we can also run periodic virus scans, and file permissions are set so that a user can write only to his own home directory, so a potential virus can't spread. We haven't fully implemented this yet, though, because we have to change every PC's network settings to mount the Linux drive and move all files to it. We also have trouble forcing our users to log on to Windows with their Linux user names and passwords, and explaining to them how to change the Windows password every time their Linux password expires.
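The per-user share side of this scheme is just Samba's standard [homes] mechanism; a minimal smb.conf sketch (the workgroup name and comments are illustrative, not our production file):

```
[global]
   workgroup = BANK           ; illustrative workgroup name
   wins support = yes         ; this Samba box acts as the WINS server
   security = user            ; users authenticate with their Linux accounts

[homes]
   comment = Home Directories
   browseable = no            ; don't list every home share in browse lists
   writable = yes             ; each user can write to his own home directory
   valid users = %S           ; %S expands to the share (i.e. user) name
```

With this in place, each Windows box can mount \\server\username at startup and applications simply save there.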
One day, we received a request to allow access to an old Clipper application from a remote location. Had we simply installed it on the remote PC and mapped a network drive to our machine, the Clipper database would have travelled over the modem line, so we would have had to increase bandwidth. More importantly, we would have risked database corruption, and it's the same story with viruses, since we have no physical control over remote locations. Linux took on a role again: we installed the DOS emulator, DOSEmu. A few smaller problems arose. Clipper is never idle, so DOSEmu takes all available CPU time; since we have only a few Clipper users, we simply gave DOSEmu a lower priority. This way, only screen updates travel the network, and with ttysnoop we can see exactly what a user is doing. Also, there's no database corruption when Windows dies, because Linux doesn't die.
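Lowering DOSEmu's priority is just the standard nice mechanism; a sketch (the niceness value is an example, and echo stands in for the actual emulator invocation):

```shell
#!/bin/sh
# Launch a command at reduced priority so a busy-waiting Clipper session
# under DOSEmu cannot starve interactive users. Niceness 15 is an example.
run_deprioritized() {
    nice -n 15 "$@"
}

# In practice the argument would be the dosemu command line;
# echo stands in here for illustration.
run_deprioritized echo "clipper session started"
```

An already-running session can be demoted the same way with renice on its process ID.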
Linux made our life much easier, especially on the Monday morning when the hard disk on our SCO server didn't wake up. We had no up-to-date database on our backup server, since the entire bank had moved to a new location the previous weekend. The backup server was moved first, so there had been no overnight database copying. We had a tape backup, but restoring it takes at least an hour and a half. We had the Informix log file, which takes at least as long to roll the database forward. Luckily, we had an up-to-date database on our development server; unfortunately, copying it over the network takes about an hour. Our customers were waiting, so we chose to run the bank on Linux for a day. The server was not really a server but a PC with a P133, 16MB of RAM and an IDE disk, so there was a lot of swapping, but most users didn't notice any difference.
Today, Linux does quite a good job for us, and so do the GNU programs. All our development is done on Linux: C programs are then recompiled with the GNU C compiler on either SCO or AIX, and 4GL programs are simply copied to the proper place. Bash became our standard shell on both AIX and SCO, and CVS takes care of version control. Communication with remote locations is also handled by Linux, and so is printer spooling. With the proxy, the internal web server and news, Linux plays a significant part in our job. Still, it's not yet ready to be the database and application server for our purposes, because it lacks a few features our mission-critical applications need: high-availability cluster software and a journaling file system. We cannot afford any data loss, and that's the main reason we have chosen AIX for certain applications.
At the end of this year, we plan to migrate our database, applications and development completely to AIX, so we'll stop using SCO, but Linux will stay. It works very well and has greatly improved our security and administration.
Josip Almasi (firstname.lastname@example.org) has been in database design/programming for ten years, and in Novell and UNIX system administration for the last five years. He is currently working as a database/system administrator at Partner Bank, Croatia. His hobbies are Aikido and blues harmonica.