From The Editor: Tugboats, Radios, & Our Professional Obligation to Registries

Issue: Volume 12, Issue 6 - June 2018
Author: Caroline E. Fife, MD, FAAFP, CWS, FUHM

Chicago, IL — Ernest A. Codman, MD, FACS, became a surgeon at the turn of the 20th century and spent his life on a crusade to reform surgical practice and medical care. A graduate of Harvard Medical School (class of 1895), he would go on to formulate what he referred to as the end-result system, which called for following up on patients and systematically measuring their medical and surgical outcomes. Due in large part to this work, Codman is rightfully considered by many to be the father of the quality movement in healthcare. Standardizing basic practices was a novel concept in the early 1900s, but general support for the idea came from unexpected events. For instance, the 1932 legal case T.J. Hooper v. Northern Barge Corp., which came to be known as "The Radio-less Industry Standard Case," involved two tugboats that, lacking radios, lost two barges during a storm. The tugboat company was found negligent for failing to provide the necessary equipment (a radio) when it was demonstrated that four other tugs had avoided accidents because they carried radios. Since then, the process of establishing standards and holding organizations accountable to them has improved the safety of industries from medicine to coal mining.

Codman also initiated one of the industry's first true registries – a bone sarcoma registry, which he detailed in "The Registry of Bone Sarcomas as an Example of the End-Result Idea in Hospital Organization," a paper published in 1924. To say that Codman was an advocate for seeking accountability among his clinical peers would be an understatement. Consider this excerpt: "Every hospital should trace each patient with the object of ascertaining whether the maximum benefit has been obtained and to find out if not, why not. The end-result idea merely demands that the results shall be constantly analyzed, and possible methods of improvement constantly considered. Bad results may be due to incorrect diagnoses, to lack of equipment, to errors of care, of judgment, or of skill. The end-result idea implies that the hospital should be conscious of its shortcomings, and constantly on the watch to improve its equipment and method." These statements from the original paper, which was reprinted as a "classic article" in 2009,1 take me to why I was recently in Chicago attending the Future of Clinical Registries Summit convened by the Council of Medical Specialty Societies (CMSS).

In 2012, CMSS identified the establishment of registries as a physician responsibility. Today, there are 31 specialty societies with 49 registries representing an aggregate of $500 million of investment (funded by the specialty societies themselves through contributions from industry). Yes, that's an average of $10 million per registry. There is one thing we know for certain – active use of performance comparisons from registry data improves patient quality of care. Many examples were presented by specialty societies at this summit – one of which concerned inferior vena cava filters. We all believed that they reduced complications from deep venous thrombosis. However, registry data from a collaborative quality initiative convened by Blue Cross Blue Shield of Michigan demonstrated that, in fact, they increased both morbidity and mortality. Within one year of this revelation, there was a 90% reduction in their use (and today they have nearly disappeared). It typically takes 15 years for new evidence to become standard practice, but registries married to quality-improvement programs can change practice fast, particularly when payers are involved. Interestingly, specialty societies that had registry reporting mandated as a condition of payment (as cardiology often has) called that mandate "good luck that, sadly, doesn't last forever." Mandatory reporting helped get many physicians and societies past the barriers to participation. Universally, the biggest barriers societies face in implementing a registry are: 1) convincing doctors to accept the use of measurement; 2) the cost of the registry to the society; 3) producing outcomes relevant to patients; and 4) the burden of registry implementation.

Here is a list of things that I learned at this summit that nearly all specialty society registries have in common: 1) they provide quality data submission to the Centers for Medicare & Medicaid Services for members; 2) they support maintenance of board certification through the American Board of Internal Medicine's Maintenance of Certification program; 3) they develop quality measures through their qualified clinical data registry to fill gaps in the Merit-Based Incentive Payment System program; 4) they use quality measure data for benchmarking services to practitioners; 5) they use quality measure performance to develop campaigns, some of which are national in scope and may involve helping patients find the best care; 6) they at least try to monetize data via industry agreements and commercialization; 7) they publish scholarly papers on their findings; 8) they develop risk models to predict outcomes and to help guide decision-making; 9) they are dedicated to using registry data to improve patient outcomes, which means changing practice in response to data; and 10) they actively share data with payers.

That last point is very important. This isn't just about altruism. In 2017, Blue Cross Blue Shield of Michigan began to pay a differential for quality and outcomes based on the data from its collaborative. Officials found that clinicians in the top tier of quality performance had shorter hospital stays, lower costs, and fewer adverse events. Recently, the U.S. Wound Registry demonstrated that practitioners who engage in registry reporting have 10% better healing rates for diabetic foot ulcers and venous leg ulcers than their non-reporting colleagues. I think by next year we will be able to show that they decrease time in service and cost of care as well. We plan to take that story to the payers. Timothy Ferris, MD, chairman and chief executive officer of the Massachusetts General Physician Organization, home of the Codman Center for Clinical Effectiveness in Surgery, concluded, "there is a professional obligation to build, maintain, and use registries. In fact, it's an abdication of professional responsibility if a registry exists that is relevant to your practice and you don't participate." Registries are to medical practice what radios are to tugboats: it is not possible for practitioners to safely navigate the treacherous sea of real-world practice without a registry to monitor and improve performance. Thanks to the likes of Dr. Codman, surgeons had a head start of about 100 years. Wound care practitioners have a lot of catching up to do, but we are getting there.

Reference

1. Codman EA. The classic: registry of bone sarcoma: Part I. Twenty-five criteria for establishing the diagnosis of osteogenic sarcoma. Part II. Thirteen registered cases of "five year cures" analyzed according to these criteria. Clin Orthop Relat Res. 2009;467(11):2771-82.