Proving Your Quality of Care Compliance: A Case Study
- Mon, 2/11/13 - 11:58am
The fact that the healthcare system is on the verge of a complete overhaul is not a newsflash. For more than a decade, measurement of physician performance (ie, offering incentives to improve care) has been the focus of many governmental, professional, and private groups. However, questions remain unanswered as to what should be measured, how it should be measured, and how to secure the requisite “buy-in” from physicians and other wound care providers.
A recent editorial in JAMA discusses these challenges and emphasizes some important things to remember about measuring performance: 1) Select measures that physicians can definitively impact. (Many times, healthcare systems choose measures that an individual doctor can’t affect.) 2) Make “doing the right thing”1 feasible. (Your information technology [IT] system must make documentation easy to complete.) 3) Consider measures that will be important to the patients and their care. (Having a specific lab value as a goal may not be the best reflection of good care.2)
When officials at Precision Health Care, a national wound center and hyperbaric management company based in Boca Raton, FL, wanted to develop a treatment algorithm for wound care, they consulted with a thought leader in the wound industry who encouraged them to “keep it simple” because many organized wound centers “miss the boat” on wound care basics – such as offloading diabetic and pressure ulcers, placing venous stasis ulcers into compression, and screening the vascular status of all patients with lower extremity ulcers. One analysis of electronic health record (EHR) data from wound centers found that only 17% of patients living with venous leg ulcers received compression.1 Why is this? It may be that clinicians do not know how to provide the service, forget to do so consistently, or fail to document when they have provided such services.
Merging EHR With Best Practices
Instead of attempting to devise a complicated, multi-page algorithm for wound assessment and management, Precision officials decided to adopt a basic clinical practice guideline (CPG) that focused on vascular screening and compression for venous ulcers, vascular screening and offloading for diabetic ulcers, and nutritional screening and pressure reduction for pressure ulcers. (It was also decided to always assess hemoglobin A1c in diabetics and conduct biopsies on nonhealing wounds.) It seemed like an easy-enough plan.
Clinicians were educated on the CPGs and the importance of adhering to them to improve patient care and healing rates. The decision was also made to monitor compliance quarterly and review findings among peers. The results were not considered “stellar” after the first quarter of assessment, as the average rate of compliance for any one measure was less than 50%. However, the problem was not a lack of buy-in from the providers, but a much more manageable issue — the CPGs had not yet become “routine” in their minds. To alleviate this concern, reminders to adhere to guidelines were strategically placed throughout patient-care areas and compliance evaluations began to take place on a monthly basis. As a result, CPG documentation showed improvement after one month’s time. But when that success proved short-lived (at the end of the second quarter, documented compliance was actually below that of the first quarter), new measures were needed.
Failure to Act or to Document?
Healing rates remained in the 90th-percentile range, so clinicians were doing the appropriate things. Why, then, did the CPG documentation not reveal 100% compliance with measures? Upon further examination, reviewers found that clinicians occasionally documented CPG compliance in the incorrect area of the EHR (while manually reviewing the records, reviewers gave credit for those measures).
As the JAMA article states, the IT system has to “make it easy to do the right thing.” The EHR we employed had been designed with special “macros” (computer instructions that represent a sequence of operations) that made documenting compliance with the CPGs as easy as “clicking on the box.” It was not necessary for clinicians to type anything into a “field.” Structured language programming within the EHR could credit the clinician with having implemented the CPG because the EHR could interpret the use of the macro. However, the automated compliance checks built into the computer could not interpret what the clinician “typed in the box”; those typewritten entries had to be counted and credited manually by human reviewers.
Patient outcomes were good because clinicians were, indeed, following appropriate practices; they were simply doing a poor job of using the EHR designed to help them document that care. Once this issue was addressed, documented compliance improved significantly and matched the patient outcome data — ultimately demonstrating how well physicians were actually adhering to the CPGs.
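The gap described above — an automated check that credits only structured, macro-generated entries while free-text documentation goes uncounted until a human reviews it — can be illustrated with a minimal sketch. Every field name, record shape, and keyword match here is a hypothetical assumption for illustration; none of it reflects Precision’s actual EHR schema or logic.

```python
# Hypothetical sketch of structured vs free-text compliance checking.
# All field names ("cpg_compression_macro", "free_text_note") are invented.

def automated_compliance(record: dict) -> bool:
    """Automated check: credits compliance only when the macro checkbox was used."""
    return record.get("cpg_compression_macro") is True

def manual_review(record: dict) -> bool:
    """Human reviewer: also reads the free-text note for evidence of compliance."""
    if automated_compliance(record):
        return True
    note = record.get("free_text_note", "").lower()
    return "compression" in note  # crude stand-in for human judgment

# Two clinicians document the same care: one via the macro, one as typed text.
records = [
    {"cpg_compression_macro": True, "free_text_note": ""},
    {"cpg_compression_macro": False, "free_text_note": "Applied compression wrap."},
]

auto_rate = sum(automated_compliance(r) for r in records) / len(records)
manual_rate = sum(manual_review(r) for r in records) / len(records)
print(auto_rate, manual_rate)  # automated check undercounts: 0.5 vs 1.0
```

The point of the sketch is that both patients received identical care; only the documentation path differed, which is exactly why the automated compliance figures lagged behind what manual chart review (and patient outcomes) showed.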