It’s an old management axiom: “You can’t improve what you can’t measure.” And it’s true: how can you improve something, allocate resources effectively, compare yourself with peers, or even know whether quality is rising or falling, if you haven’t measured it?
This concept is highly relevant to much of the current thinking surrounding EHRs and healthcare quality initiatives. The federal government has invested billions of dollars, private investors are shoveling cash at health IT, and medical offices are scrambling to implement the right technology. But is it all worth it?
A new analysis from the RAND Corporation, published last week in the New England Journal of Medicine, suggests that we may not really know for some time, because current administrative processes haven’t kept pace with developments in the delivery of care. Standard billing and coding operations simply aren’t up to the challenge of measuring technology’s effect on healthcare, and it will likely take a good deal more innovation to get to the point where we can.
The question goes to the heart of the meaningful use program. Lawmakers and taxpayers alike want to see rapid returns on their investments, which have been sizeable. However, even if care does become more efficient and productive, the gains may not show up in the numbers and study findings, which could prompt a budget-conscious public to pull back on health IT investment.
The authors of the report say we have seen all this before. As computers started infiltrating other industries decades ago, researchers began observing what became known as the “IT productivity paradox.” Workers were able to get far more done than ever before, but if you looked only at the data, it appeared they were less productive. Why? The measures had not kept up.
See on ehrintelligence.com