Measuring Quality in Wound Care

Issue: Volume 7 Issue 1 - January/February 2013
Caroline E. Fife, MD, FAAFP, CWS

Editor’s Note: This article serves as a follow-up to Dr. Fife’s article:
“The Changing Face of Wound Care: Measuring Quality” published in October 2012. Dr. Fife shares an affiliation with Intellicure Inc. and US Wound Registry.

  Healthcare reform has major implications for the wound care industry. Transitioning to a value-driven payment model that focuses on better care at lower cost necessitates reconsideration of physician and hospital financial incentives. As a way to improve patient care, measure the benefits of specific interventions, and incentivize clinicians for providing them, there is a national initiative to develop and report specific clinical “quality measures.” Previously, I explained the various incentives (and penalties) that apply and detailed those measures relevant to wound care physicians under the Physician Quality Reporting System (PQRS) — formerly the Physician Quality Reporting Initiative (PQRI).1 Wound care clinicians are glad to have some measures to report, even if it is not clear how well these particular measures will improve the outcomes of patients living with chronic wounds. We are going to need more quality measures in wound care, particularly measures that can be reported directly from electronic health records (EHRs), since the Centers for Medicare & Medicaid Services (CMS) intends for all data to be reported this way — meaning wound care practitioners must decide what constitutes quality care and how it is to be measured … before it’s decided for us.

How Quality Data Gets Reported

  More than 40 years ago, the late Avedis Donabedian proposed models for measuring almost every aspect of quality in healthcare. Widely regarded as the first to formally study healthcare quality, Donabedian defined an “outcome” as a change in an individual as a result of the care received. Patient outcomes can be characterized by clinical endpoints (eg, amputation), functional status (eg, ambulation), or general well-being (eg, pain). It is also possible to measure the appropriateness of clinical interventions that are known to improve a desired outcome (eg, diabetic-foot offloading). Although CMS prefers outcome measures to “process” measures, compliance with clinical practice guidelines such as offloading may be easier to assess than a more subjective clinical endpoint such as “wound healing.” Most medical societies, along with measure-development organizations such as the American Medical Association’s Physician Consortium for Performance Improvement® and the National Quality Forum (NQF), have spent years developing quality measures for the majority of specialties and major disease states.

  One of the nation’s first quality incentive programs, the PQRS (also known as pay for performance) began in 2007. Since the use of EHRs was not widespread at the time, reporting was facilitated via qualified patient registries that collected data from claim forms before transmitting it to CMS on behalf of eligible providers. The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 mandated the adoption of certified EHRs, changing the dynamics of the reporting process. HITECH also made stimulus money available to clinicians who demonstrated the “Meaningful Use” of their EHRs through a number of metrics, including participation in quality reporting. To make electronic data-sharing possible, all certified EHRs must use Health Level 7 Clinical Document Architecture, which consists of a mandatory “textual” component (to ensure human interpretation of the document) and “structured” components for software processing, which allow data to be shared. In order for eligible providers to obtain their HITECH-adoption bonus money, they must meet certain program requirements that are still being developed by CMS. Under Stage II of Meaningful Use (beginning in 2014), providers must share data with a public health agency or a specialty registry by transmitting “directly from an EHR.” Accomplishing this will require transfer of structured data (not “free-text typing” or dictated notes). Medicare is also driving the PQRS program toward transmission of quality data directly from the EHR to CMS via “e-Measures.” The clinician will transmit clinical performance data to CMS, and CMS will calculate the “pass” or “fail” rate of the quality measure to determine subsequent bonus (or penalty) payments. So, while EHRs may change the method of reporting, the real question remains: What, exactly, are we going to report in order to demonstrate wound care quality?

A Real Example in Wound Care

  As executive director of the US Wound Registry (USWR), a nonprofit organization that has been a CMS-approved patient data registry since the PQRS launched, I have been involved in performance reporting for four years. Registry responsibilities include: validating eligible providers (EPs), collecting medical data needed for EP reporting measures, acquiring attestation from EPs (permission from the clinician to report data to CMS), calculating measures (including de-identifying data), and transmitting secure data to CMS. PQRS data submission is a daunting task, particularly since some measures are highly complex to calculate. Initially, there were no PQRS measures directly relevant to wound care physicians, but by 2009 one of the 153 measures pertained to wound care specifically — “percentage of patients over age 18 with a diagnosis of venous ulcer who were prescribed compression therapy within the 12-month reporting period.”

  To receive bonus pay, clinicians had to successfully report at least three measures. Specialists like cardiologists and oncologists didn’t have much difficulty because their medical societies worked hard to create several relevant quality measures. Unfortunately, wound care providers had to report at least two other measures that were not directly relevant to their practice (eg, inquiring about tobacco use or body mass index screening).

  In 2009, the bonus for reporting was an additional 2% of a provider’s total annual Medicare billing. After the USWR submitted the physician’s PQRS report to CMS, CMS went through its own validation process before mailing a check to the provider. From 2008 to 2010, the USWR offered free reporting services to clinicians using the Intellicure EHR. Remember, PQRS was really a “pay for reporting” program because it was not necessary for clinicians to actually pass measures; they simply had to successfully report them. Furthermore, while most US clinicians had at least some additional documentation burden to participate in PQRS, wound care doctors for whom we reported did no additional work to report measures because the necessary documentation was incorporated into their EHR and the data were abstracted directly from the EHR. However, an estimated 20% of clinicians eligible to report through the USWR refused to participate. When we later inquired as to their reasons, common themes were: 1) not wanting the government to “watch their practice,” 2) lack of knowledge regarding pay for performance, 3) inability to get the paperwork faxed on time, and 4) an employment situation whereby another entity would get the bonus money, thus removing all incentive to report. In 2013, the bonus has been reduced to only 0.5% of total Medicare billing as the PQRS program transitions into its penalty phase. Come 2015, physicians will lose 1% of Medicare revenue for not reporting, increasing to a 2% reduction in 2016 and a 3% reduction in 2017. If the “carrot” of a 2% bonus failed to entice some wound care clinicians to report in 2009, will these penalties be enough to spur them to participate, or is the crux of the problem the quality measures themselves?

Quality Reporting Concerns

  It may be worth going through the details of how the venous ulcer measure is reported as an example of what can go wrong in quality reporting. The venous ulcer measure had excellent, well-written supporting materials detailing the evidence base for compression in the healing of venous ulcers and clearly defining “adequate” compression (eg, multilayer bandages, Unna’s boot, 30 mmHg stockings). However, the “measure” itself (the rules that determine how information is reported) allows any type of compression to pass (eg, T.E.D. hose, Ace™ bandages), and the compression only has to be prescribed once in a 12-month period. While wound care clinicians realize how faulty this is as a measure of “quality care,” when the measure was created it was not yet feasible to specify the exact type of compression and/or track the frequency of its provision since most clinicians were still using paper charts.

  To report the venous ulcer measure in 2009, the USWR began by identifying patients insured by Medicare Part B who were billed for an evaluation and management service between Jan. 1 and Dec. 31 (patients had to be older than 18). Data were next queried for ICD-9 diagnosis codes indicating venous disease. Here, we encountered another problem with the venous ulcer measure design: the measure as written omitted the “454.x” diagnosis codes (among the most commonly used for venous stasis ulcers), potentially excluding the majority of patients in a practice from reporting. However, CMS allowed registries the leeway to identify the target diagnosis even if the code specifications in a measure were flawed. Thus, despite the flaw in the way the measure was written, all venous ulcer ICD-9 diagnoses were identifiable by the USWR, and the above formula provided the “denominator” of the measure. The “numerator” was determined as follows: During the year in question, was compression therapy prescribed at least once? If so, the clinician “passed” the measure.
  The PQRS reporting process acknowledges that a justifiable reason might exist for not performing a therapeutic intervention. There may have been a medical reason for not providing compression (eg, concern over arterial status), the patient may have refused compression, or there might have been a “system” reason (eg, certain supplies were unavailable). If a justification was provided, the measure passed. If no compression was prescribed at any visit during the one-year period and no reason for omission was documented, the measure failed. In 2009 (and in all subsequent years) all wound care physicians reporting PQRS data through the USWR passed the venous ulcer measure. In other words, every wound care clinician practiced “quality venous care” if the definition of “quality” is “to provide any type of wrap once.”
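The denominator/numerator logic just described can be sketched as a simple registry query. The following is a minimal illustration in Python, not the USWR’s actual specification; the record fields and the diagnosis-code prefixes are hypothetical stand-ins:

```python
from dataclasses import dataclass, field

# Illustrative venous-disease ICD-9 prefixes only. As noted above, the
# measure as written omitted the commonly used 454.x codes, but CMS
# allowed registries to include them anyway.
VENOUS_ICD9_PREFIXES = ("454", "459.81")

@dataclass
class Visit:
    icd9_codes: list
    em_service_billed: bool       # evaluation & management service billed
    compression_prescribed: bool  # any type of compression counts
    exclusion_documented: bool    # medical, patient, or system reason

@dataclass
class Patient:
    age: int
    medicare_part_b: bool
    visits: list = field(default_factory=list)

def in_denominator(p: Patient) -> bool:
    """Medicare Part B patient older than 18, billed for an E&M service
    during the reporting year, with a venous-disease diagnosis."""
    return (p.age > 18
            and p.medicare_part_b
            and any(v.em_service_billed for v in p.visits)
            and any(code.startswith(VENOUS_ICD9_PREFIXES)
                    for v in p.visits for code in v.icd9_codes))

def passes_measure(p: Patient) -> bool:
    """Numerator: compression prescribed, or a justification documented,
    at least once in the 12-month period."""
    return any(v.compression_prescribed or v.exclusion_documented
               for v in p.visits)
```

Note how permissive the measure as written is: one prescription of any compression type (or one documented justification) anywhere in the year is enough to pass.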

Does A Measure “Gap” Exist?

  At the USWR, we wanted to determine the passing rate of a venous ulcer measure had it been better designed to assess real quality. After calculating the measure as written, we used the same data to calculate the pass rate of the same physicians on the same patients had the measure been written as we thought it should be; that is, we ran queries to determine whether the patient had been provided adequate compression (such as multilayer bandages, Unna’s boots, or 30 mmHg stockings) at each visit. Evaluating clinical performance in this manner, we found that patients living with venous ulcers were discharged from outpatient wound centers in adequate compression only 17% of the time.2 So, even when allowing for appropriate justifications for not providing compression at any given visit, if we created a venous quality measure defined as adequate compression at each visit, the measure would be much harder to pass given current practice behavior. However, a clinician would not need to achieve 100% compliance; the “passing score” for the measure is 80%.
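Under such a redesigned per-visit measure, the pass/fail computation itself would be simple. This is a hypothetical sketch in Python (the function name and input format are my own): each entry records whether a visit had adequate compression or a documented justification, and the 80% passing score is the threshold.

```python
def passes_stricter_measure(visit_results, threshold=0.80):
    """Hypothetical per-visit venous ulcer measure.

    visit_results: one bool per visit; True means adequate compression
    was provided, or a justified exclusion was documented, at that visit.
    The clinician passes if the compliance rate meets the threshold.
    """
    if not visit_results:
        return False  # no eligible visits, nothing to score
    compliance = sum(visit_results) / len(visit_results)
    return compliance >= threshold
```

At the 17% compliance rate observed in our data, almost no clinician would pass this version of the measure, which is exactly the “gap” the next section discusses.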

  There are positives and negatives to this from a “measure development” standpoint. If a measure is designed in such a way that it fails to measure quality in any real sense (like the current venous measure), then it is a failure because it doesn’t measure whether appropriate care was ever provided. However, if a measure is designed in such a way that it is either too difficult to report or too difficult to pass, it becomes unusable and thus fails for a different set of reasons.

How Would Wound Care Measures Be Developed?

  The current venous ulcer measure is going to be “retired” after 2013 since no measure sponsor has indicated a willingness to provide the mandatory testing. So, as of 2014 there will be no measure of quality in the treatment of a venous ulcer unless the wound care industry decides to create and test a new one. Proposed measures must undergo rigorous evaluation. The NQF provides detailed information on its measure-testing process, and several conditions must be met before NQF will grant consideration. If all conditions for consideration are met, candidate measures are evaluated for their suitability based on four sets of standardized criteria (in the following progressive order): importance to measure and report, scientific acceptability of measure properties, usability, and feasibility.

  Here’s the most frustrating part: The NQF only considers measures submitted in response to one of their “calls” for measures. We have spent three years looking for an NQF category that could logically have wound care measures placed within it, but have been turned down each time.

  The testing process can take more than a year and is evolving because it was previously necessary to ensure that measures could be reported equally well via paper forms or via a registry. With the advent of electronic reporting, new types of measures are feasible.

  If a replacement venous ulcer measure were developed, it would first need to go through the above NQF process. If approved, it would then begin the process of becoming an electronic measure. The process of “e-Measure” development is, in essence, the creation of the specifications of the computer query that enables the transmission of clinical data directly from the physician’s EHR to CMS. This is an expensive development process that requires its own rigorous testing, and specialty societies spend large amounts of financial and personnel resources to shepherd measures through the endorsement process. It is not clear where such resources would come from in the wound care industry. What is at stake is not merely the loss of 3% of Medicare billing for each practitioner under PQRS for “non-reporters.” The Affordable Care Act mandates that in 2015, a substantial portion of a hospital’s revenue be linked to the reporting of quality measures. The wound care industry needs a collaborative approach to the development and testing of quality measures so that it will not be left behind in the transition to a value-based healthcare system.

Caroline E. Fife is co-editor of TWC.


1. Fife CE. The changing face of wound care: measuring quality. Today’s Wound Clinic. 2012;8:10-14.

2. Fife CE, Carter MJ, Walker D. Why is it so hard to do the right thing in wound care? Wound Repair Regen. 2010;18:154-158.

How EHRs Impact Quality Measures – Online Exclusive

  Stage II of the Health Information Technology for Economic and Clinical Health (HITECH) Act’s Meaningful Use incentives requires clinicians to implement clinical decision-support rules relevant to their specialty and to prove their ability to track compliance with those rules. Many studies have shown that physician implementation of complex clinical practice guidelines (CPGs) is poor because there are too many decision points and the CPGs are not available at the point of care. You can read more about one facility’s experiences in adopting new CPGs in the January/February 2013 issue of Today’s Wound Clinic. There, physicians were not fully engaged in the process until a more productive electronic health record (EHR) documentation system was installed.

  Ultimately, the physicians involved in the initiative began to receive weekly quality reports for diabetic patients and adherence to vascular screening and offloading. Importantly, the measures selected were evidence-based, relevant to the patient, and under the direct control of the clinician. Documentation was done at the point of care in a specialty-specific EHR that used structured language for data entry, making it easy for physicians to document what they did (and why they did not perform an intervention). Structured language facilitated the automated reporting features that provided the ability to track compliance in real time. Thus, clinicians could get feedback on “missed opportunities” for quality almost immediately using reports that allowed them to find the names of patients who needed quality interventions. If these measures were undergoing formal development testing for National Quality Forum endorsement, the next step would be to evaluate the validity of the measure by determining whether high performance leads to better patient outcomes.

Charting a Way Forward

  Naturally, there are many possible wound care quality measures. The Affordable Care Act mandates that starting in 2015, a substantial portion of a hospital’s revenue be linked to the reporting of quality measures. It is likely that in the near future, physician revenue will be similarly linked to quality metrics. The wound care industry needs a collaborative approach to the selection, creation, validation, and testing of an entire suite of quality measures (eventually electronically reported) so that we will not be left behind in the transition to a value-based healthcare system.