The Complexities of Creating Real Electronic Health Records
If you have been paying attention (even if you do not live in the United States), you know that the U.S. Congress passed a broad, sweeping piece of legislation purported to reform health care in the United States. What you probably do not know (even if you do live in the United States) is that this is not the first piece of legislation to target the issue of health care. In fact, a number of pieces of legislation, and a fair amount of money, have been thrown at the issue in the last few years, specifically in the area of electronic health records. But with all of this focus on streamlining and digitizing health records, I began to wonder: where does the Open Source community stand, and where is its input? There is certainly a lot of money sitting out there for someone who wants to try to build the better mousetrap.
What got me thinking about this was an article in the Spring 2010 issue of 2600 that reminded me that the issue of record keeping, one of the so-called linchpins of reducing the costs of health care, is not as simple as you might expect. The article cites the break-in and ransom of the Commonwealth of Virginia's Prescription Monitoring Program. This is not the first time I have written about that incident, but for me, it highlights one of the many issues that need to be considered when we talk about an eHealth record system, and a few other issues deserve attention as well if we are to understand the full scope of the problem.
Electronic health records need to be about more than just cost recovery. You could argue that electronic health records are already widely used, and I would not argue with you. I have records in more than my share of medical facilities. But the core purpose of these records is not patient care; it is financial recovery. Nothing made this clearer to me than when I went in for a routine procedure last year. I sat in front of a gentleman who typed in all my important data (name, insurance company, credit card number), made me sign, electronically, a "Yes, I will pay" form (oh, and a living will), strapped a band, both human-readable and bar-coded, around my wrist and sent me on my way. While I was lying on the gurney and a procession of medical professionals came by for my physical signature on a ream of paper, each page of which went into a three-ring binder, I began to wonder just how automated this all was. The answer, of course, is not very...until the bill arrived.
The health insurance payment process is a cottage industry: a very large, poorly organized cottage industry. In the course of my day job, I had the opportunity to talk with a gentleman from one of the giant health-care insurance agencies in the United States. We started talking about the process of handling insurance bills and the complexity of it all. The process is byzantine. A large amount of the physical number crunching is done on what can only be described as legacy systems, with input and output bolted on to them. Old-fashioned EDI rules the roost, not XML or similar forms of data transformation, although they are gaining a foothold in some areas. Many of the processes, both input and output, are highly manual, prone to error and, as you might expect, expensive.
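To make the EDI-versus-XML contrast concrete, here is a minimal sketch of what the gap looks like in code. The segment below is a simplified, hypothetical X12-style subscriber-name (NM1) segment, not a real 837 claim, and the field positions and XML element names are illustrative assumptions, not any insurer's actual format:

```python
# Illustrative only: parse a simplified X12-style EDI segment into a dict,
# then emit the same data as XML. Real X12 claims have many segments,
# loops and a formal implementation guide; this shows only the flavor.
from xml.etree.ElementTree import Element, SubElement, tostring

def parse_segment(segment: str) -> dict:
    """Split one EDI segment on the '*' element separator."""
    parts = segment.rstrip("~").split("*")
    # Field positions are fixed by the segment definition (assumed here).
    return {
        "segment_id": parts[0],
        "entity": parts[1],
        "last_name": parts[3],
        "first_name": parts[4],
    }

def to_xml(record: dict) -> str:
    """Re-express the same record in the newer, self-describing style."""
    root = Element("subscriber")
    for key in ("last_name", "first_name"):
        SubElement(root, key).text = record[key]
    return tostring(root, encoding="unicode")

edi = "NM1*IL*1*DOE*JOHN~"
record = parse_segment(edi)
print(to_xml(record))
# <subscriber><last_name>DOE</last_name><first_name>JOHN</first_name></subscriber>
```

The terse positional EDI form is compact but meaningless without the paper specification; the XML form carries its own field names, which is exactly the trade-off the industry is slowly working through.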
Electronic health records are not just for hospitals. Like most of you, I have my fair share of doctors that I have to visit on a semi-regular basis, and I seem to add more each year. I have a stable of about eight medical professionals who look after my family, and of these eight, only one, my ophthalmologist, is completely paperless, except for the actual signature on the charge sheet. The rest have racks and racks and racks of files, filled with inches and feet of medical records. And in most of these offices, there is a dedicated person (or team of people) whose only job is dealing with insurance billing and paperwork, but no one is dedicated to the files per se. What this says to me is that any new system has to be easy to use, or medical offices will need to hire more people to do less medical work. One thing in the 2600 article surprised me: when you are injured and taken to the hospital, the EMTs also have to fill in paperwork, and as the author illustrates, sometimes multiple copies to feed multiple, disparate and disconnected systems in order to recover their funds. That means remote devices will have to be part of the solution as well.
So if these are some of the issues, are there existing models or solutions that might give us a nudge in one direction or the other? One example of how it could work is the Veterans Health Administration's VistA system, which has won numerous awards and accolades for its technology. The VistA system automates everything from patient care to benefits management to remote prescription dispensing. Doctors can share diagnostic information, images and decisions locally or through consultation across the nation. And the code is Open Source.
But as good as VistA is, it is not the ultimate solution. For starters, the implemented system uses legacy equipment and languages in which most people no longer program. Further, it is a closed system in terms of who it talks to and who it receives data from, which means the data are secure, but getting data into and out of the system still requires a degree of manual, sneakernet processing.
This leads us to another, different set of issues.
How do you get data into and out of the system, and do it securely? I would argue that the banking system probably offers a good model, but I cannot speak to it completely or say whether it would actually translate. Clearly, encryption, shared and public keys, and dedicated pipes would be required, and that infrastructure costs money.
How do you set about equipping what are essentially mom-and-pop shops, in the form of doctors' offices? And how do you keep the software running, up to date and repaired when it breaks? A whole cottage industry could evolve around this, but with very thin margins. Most doctors' offices are run on a shoestring, if they make money at all, so the added expense of hardware, software and the other moving parts that make up the system has to be economical. That makes cloud computing, or hosted applications, or whatever you want to call it, make sense in certain cases (and yes, I can think of hundreds of reasons why it should not be hosted in the cloud).
Then we have to wrestle with the standards for data interchange. That simple term, data interchange, is loaded with so many complications, permutations, combinations, and issues that the term just is not sufficient to cover the mess that needs to be addressed. And coming up with acceptable standards is going to be a long, tortured process.
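As one small illustration of why "data interchange" is such a loaded term, consider HL7 v2, one of the formats actually used between clinical systems. The sketch below is a deliberately minimal parser for a simplified HL7 v2-style fragment; the message content is invented, and real HL7 also defines escape sequences, repetition separators and dozens of segment types that this ignores:

```python
# Minimal, illustrative parser for a simplified HL7 v2-style message.
# Real HL7 v2 declares its delimiters in the MSH segment and has far
# more structure; this handles only '|' fields and '^' components.
def parse_hl7(message: str) -> dict:
    """Index each segment by its three-letter segment ID."""
    segments = {}
    for line in message.strip().split("\n"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

msg = ("MSH|^~\\&|LAB|HOSP|EHR|CLINIC|201006011200||ADT^A01|MSG0001|P|2.3\n"
       "PID|1||12345||DOE^JOHN||19700101|M")
segments = parse_hl7(msg)
# PID-5 is the patient name; its components are separated by '^'.
last, first = segments["PID"][5].split("^")
print(last, first)  # DOE JOHN
```

Even this toy version shows the problem: every field position, separator and component carries meaning defined in a thick specification, and two systems must agree on all of it before a single record moves cleanly between them.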
In all cases, though, all roads seem to lead to modernization. No real improvements, in terms of cost savings or speed of processing, will occur without a great deal of modernization. You will not be able to implement some of the necessary standards if the underlying OS is incapable of utilizing them (or the hardware cannot absorb the protocol). I realize it is hard to imagine, in 2010, that there are any systems out there that do not speak TCP/IP, but there are: a staggering number of them, fronted by gateways of questionable stability and capability. And of course, any modernization costs money.
And have I mentioned security? In the United States, there is a law known as the Health Insurance Portability and Accountability Act (HIPAA), which defines a number of (sometimes expensive) requirements to protect the data of the individual. These include, but are not limited to, encryption, access controls, audit controls and authentication controls. Entire monitoring systems could be developed just to ensure that HIPAA requirements are being met.
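To give a flavor of what an audit-control requirement implies in practice, here is a sketch of a tamper-evident audit trail: each entry carries an HMAC chained to the previous entry, so altering or deleting a past record breaks verification. The key handling is deliberately simplified for illustration (a real system would keep the key in a protected store, not in source), and the log entries are invented:

```python
# Illustrative tamper-evident audit log: each entry's HMAC covers the
# previous entry's HMAC, forming a chain. Changing any past entry
# invalidates every subsequent verification.
import hmac
import hashlib

SECRET = b"demo-key"  # hypothetical; a real key would live in a key store

def sign(entry: str, prev_mac: str) -> str:
    """MAC over the previous MAC plus this entry, chaining the log."""
    return hmac.new(SECRET, (prev_mac + entry).encode(),
                    hashlib.sha256).hexdigest()

def append(log: list, entry: str) -> None:
    prev_mac = log[-1][1] if log else ""
    log.append((entry, sign(entry, prev_mac)))

def verify(log: list) -> bool:
    prev_mac = ""
    for entry, mac in log:
        if not hmac.compare_digest(mac, sign(entry, prev_mac)):
            return False
        prev_mac = mac
    return True

log = []
append(log, "2010-06-01T09:00 dr_smith viewed patient 12345")
append(log, "2010-06-01T09:05 dr_smith updated allergies for 12345")
print(verify(log))   # True
# Tamper with the first entry's text while keeping its recorded MAC:
log[0] = ("2010-06-01T09:00 dr_jones viewed patient 12345", log[0][1])
print(verify(log))   # False
```

Access controls, authentication and encryption each need comparable machinery, which is why HIPAA compliance is a project in itself rather than a checkbox.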
Is any of this insurmountable? Certainly not, and there are a number of areas where the Open Source community can play a valuable role. But on reviewing the challenges, I also see why there is not more involvement. Many of the issues require specialized knowledge of a number of aspects of what is today a black box to many. There are few defined requirements and even fewer good directions. But then, is that not what the Open Source community does best? We see a problem and try to solve it. Perhaps the right problem has not been presented. Or maybe it is such a niche that I have not seen the strides we have made. But as the big guns of the consulting world look at solving this, I hope they will leverage the Open Source model, as well as solutions used elsewhere. And perhaps, just perhaps, a year, or five years, from now, we will be talking about Open Source eHealth software the same way we talk about network management software or VoIP software.