Today is the anniversary of the solving of Fermat's Last Theorem. As a long-recovering mathematician, these kinds of things interest me, so I sought out a copy of the proof and began reading. The mathematics-for-librarians description of the proof goes something like this:
- The Pythagorean theorem states that for a right triangle, the sum of the squares of the sides equals the square of the hypotenuse.
- Fermat claimed that the relationship only holds for an exponent of 2 (squaring), and that no higher exponent will work.
- This went unproven for more than 350 years, until Andrew Wiles finally proved it in the 1990s.
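In symbols (the standard statement of the theorem, filled in here for readers who want it): the Pythagorean equation has infinitely many whole-number solutions, while Fermat claimed the same equation with any larger exponent has none.

```latex
a^2 + b^2 = c^2 \quad \text{(e.g. } 3^2 + 4^2 = 5^2\text{)}
\qquad \text{but} \qquad
a^n + b^n = c^n \ \text{has no positive integer solutions for } n > 2.
```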
One might have thought that the problem could be solved by brute force using a computer. How many numbers are there to be dealt with? If you approach the problem this way, you've got to do it for infinitely many numbers. So, after you've done it for one, how much closer have you got? Well, there's still infinitely many left. After you've done it for a thousand numbers, how much closer have you got? Still infinitely many left. After you've done it for a million, there's still infinitely many left. In fact, you haven't done very many, have you? Using this approach, you'll never finish. This got me thinking about our EHR system.
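To make the futility concrete, here is a minimal sketch of what that brute-force search would look like. The bound is arbitrary (it is my illustration, not anything from the proof): however high you push it, infinitely many untested numbers remain.

```python
# Naive brute-force search for a counterexample to Fermat's Last Theorem:
# find a, b, c, n with n > 2 and a^n + b^n == c^n. No counterexample exists
# (Wiles proved it), so this returns None for any bound -- but returning None
# for bound 50 tells you nothing about bound 51, which is exactly the problem.
def find_counterexample(bound: int):
    for n in range(3, bound):
        # Map each perfect n-th power below the bound back to its root.
        powers = {c ** n: c for c in range(1, bound)}
        for a in range(1, bound):
            for b in range(a, bound):
                c = powers.get(a ** n + b ** n)
                if c is not None:
                    return (a, b, c, n)
    return None

print(find_counterexample(50))  # None -- and still infinitely many numbers left
```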
I think something has been lost in the confusion about a national EHR system. After all, that's the target, right, a national system? We only unleash the power of EHRs if we can make them work outside the provider's four walls. Is it possible that the logic of how we have been approaching the problem is wrong? I think it is. From the outset, the problem has been defined as: how do we develop a system that will get everyone's health records (let's call an individual record A) to some arbitrary set of healthcare providers, call them P? There are some 350 million A's, and for simplicity let's agree that there are 100,000 P's. So the system everyone is working toward is one that will enable any of the A's to get to any combination of P's.
Now what happens if we place a few hundred regional health information organizations (RHIOs) and health information exchanges (HIEs) between the A's and the P's? Let's label them G's, for gatekeepers. In the current framework, all the A's (everybody's health records) have to pass through the G's, make it up to the national network, come back down through the G's, and then be sorted through all the P's to the correct P.
How can we know this design will work for every possibility? The only way is to test every combination of A’s, G’s and P’s. It’s a difficult problem. It becomes more difficult when we acknowledge that there are hundreds of EHR vendors supplying software to all of those P’s. Many of those P’s will have modified the software, meaning that there are probably thousands of variations of EHR systems. Oh, and did I mention that all of this is being done without any single set of standards? That means my stuff will look different from your stuff, and the G’s will have to move different stuff, and from an “IT” perspective the EHRs at the end of the food chain will have to interpret different stuff and then update your stuff with their stuff. That’s a lot of stuff.
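A back-of-the-envelope count shows how hopeless exhaustive testing is. The figures below come from the post itself where it gives them (350 million A's, 100,000 P's); the gatekeeper and variant counts are my assumptions standing in for "a few hundred" and "probably thousands."

```python
# Rough size of the test matrix the post describes.
A = 350_000_000   # individual health records (from the post)
P = 100_000       # healthcare providers (from the post)
G = 300           # gatekeepers: RHIOs/HIEs ("a few hundred" -- assumed)
VARIANTS = 2_000  # distinct EHR customizations ("thousands" -- assumed)

# Every record to every provider, point to point:
record_to_provider_pairs = A * P
print(f"{record_to_provider_pairs:.1e}")  # 3.5e+13 A-to-P pairs

# Each path crosses a gatekeeper up to the network and another back down,
# and ends at one of thousands of software variants:
test_matrix = A * G * G * P * VARIANTS
print(f"{test_matrix:.1e}")  # 6.3e+21 combinations to verify
```

Even at a billion test cases per second, the second number would take hundreds of thousands of years to enumerate.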
So, if that is where things are, what can be done about it? My take is that the problem with this model lies in one word: 'everyone'. Every possible patient, with every possible need, getting to every possible provider. How do we solve, or at least shrink, a problem of this magnitude? One possible solution is to build the EHR system and the network so that one patient's record can go to one provider and be updated there. Would it not make more sense to build it for a single patient: create a universal patient record (UPR) that can handle all instances? Do it right once, prove that it works, and then replicate it, instead of building millions of different ones and hoping they work?
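One way to read "do it right once" is a single canonical record format that every system reads and writes, so replication means copying one proven design rather than translating between thousands. A toy sketch; every name and field here is hypothetical illustration, not any real standard:

```python
from dataclasses import dataclass, field

# Toy sketch of a "universal patient record": one canonical schema shared by
# all providers, instead of thousands of local variants needing translation.
# All identifiers and fields are invented for illustration.
@dataclass
class UniversalPatientRecord:
    patient_id: str                                   # one identifier per A
    demographics: dict = field(default_factory=dict)  # name, DOB, etc.
    encounters: list = field(default_factory=list)    # visit history

    def add_encounter(self, provider_id: str, note: str) -> None:
        # Any provider (P) appends in the same format; no per-vendor mapping.
        self.encounters.append({"provider": provider_id, "note": note})

record = UniversalPatientRecord(patient_id="A-000001")
record.add_encounter("P-42", "annual physical")
print(len(record.encounters))  # 1
```

Prove that this one structure round-trips correctly between any A and any P, and the replication step is copying, not re-integration.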