Enterprise-Level Health-Care Applications
Today most health-care companies are struggling to find ways to improve patient care, reduce costs and provide more and better service. They increasingly see technology as a solution to these requirements. Our company is no different. We have approximately 1,300 employees in 13 states and 40 offices. Our existing patient care system and financial accounting system were developed over ten years ago and were not designed to handle the requirements of a large, distributed organization.
Many members of our executive team have backgrounds in high-volume claims processing, where millions of claims have to be processed daily on high-availability systems. Our CEO and the executive team defined the following requirements for our new IT system:
It must be scalable, portable, reliable and secure.
It must address the requirements of the Health Insurance Portability and Accountability Act (HIPAA).
It must integrate all our applications and provide business-to-business (B2B) capability.
It must meet the “four As” of availability: accessible, anytime, anywhere and on any device.
It must be cost effective.
Scalability is a matter of survival. Many vendors have said, “hardware is so cheap, just throw more hardware at it.” This is a good solution until you are responsible for a budget and getting it past a CFO. In our situation we must not sacrifice one byte of memory, one processor cycle nor one block of disk to bloated, inefficient applications and systems.
Portability and scalability are related. If our application is written on an Intel platform, but must run on a mainframe to perform as needed, that application must be portable to different operating systems running on different hardware. Our hardware selection today may change a year from now based on speed, cost, support, reliability and available talent to run the system. We can't replace our applications whenever we change our hardware.
HIPAA, enacted by the US Congress in 1996, has tremendous implications for health-care companies. Its major regulations begin taking effect in 2002. HIPAA's authors intended to standardize how claims are processed, how data is exchanged between companies and how patient information is accessed and stored. The applications and systems we run must conform to HIPAA requirements and must change with the regulatory environment. According to industry analysts, health-care companies will spend two to three times what they spent on Y2K to achieve HIPAA conformance.
Industry analysts also claim that health care is one of the last industries to take significant advantage of advances in information technology. Many health-care companies' information systems run on old, sometimes very old, technology, and our systems must communicate with them. When we enroll a patient in our medical information system, we must automatically enroll that patient in our pharmacy provider's system so the patient can fill their prescriptions anywhere in the US the same day. We must also send information to our medical supply company to allow the patient to get medical supplies without excessive delay.
Our company evaluated over 50 health-care applications. None could address our needs. Some companies had just invested millions of dollars to go from DOS applications to client/server, 32-bit Windows applications. Some had DOS applications and just looked at us funny when we tried to explain what we wanted. In the end we found nothing that fit our needs. Our solution was to put together a team of highly experienced developers with backgrounds in high-volume transaction processing and build a transaction backbone and the applications to run on top of it.
Our first task was to define the broad outlines of the architecture. We wanted an N-tier architecture, but beyond that we wanted one that placed no limits on possible solutions. We evaluated several transaction engines and application servers. All failed to satisfy our requirements. Commercial transaction engines were either extremely expensive to purchase, maintain and develop on or they were too inflexible or unreliable for us.
How much reliability do we require? If an e-tailer is down 0.1% of the time, about nine hours per year, the worst likely outcome is the loss of some orders. However, when a nurse needs to access a patient's records to learn that patient's medication and dosage, downtime is a different issue: it is simply not acceptable.
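The nine-hours-per-year figure is simple arithmetic, worth checking: 0.1% of the 8,760 hours in a year. A one-liner confirms it:

```shell
# Hours of downtime per year at 99.9% availability (0.1% downtime),
# assuming a 365-day year: 0.001 * 365 * 24 = 8.76 hours.
awk 'BEGIN { printf "%.2f hours/year\n", 0.001 * 365 * 24 }'
```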
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
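The .log example above can be sketched as a single command; the search pattern "ERROR" is just a placeholder for whatever entry you are after:

```shell
# Find every .log file under /home and search each one for a
# particular entry; -H prefixes each match with its filename.
find /home -type f -name '*.log' -exec grep -H 'ERROR' {} +
```

Using `-exec ... {} +` batches many filenames into each grep invocation, which is faster than spawning one grep per file.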
Cron traditionally has been considered another such tool, this one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
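For readers who haven't gone beyond it, cron's whole interface is the crontab line. A minimal entry looks like this (the script path is hypothetical):

```
# minute hour day-of-month month day-of-week  command
# Run a hypothetical log-rotation script every night at 2:30 am:
30 2 * * * /usr/local/bin/rotate-logs.sh
```

What cron cannot express in a line like this (job dependencies, retries, cross-host coordination) is exactly where the upgrade question begins.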
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Google's SwiftShader Released
- Non-Linux FOSS: Caffeine!
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide