Editorial — Assuring quality: A challenge that must be met now

When my current radiology group began its quality assurance (QA) program a few years ago, like many of my colleagues, I was at first concerned. Quality control is a very sensitive subject for most doctors. Just as it is human to make mistakes, it is human to fear being judged for them. Resistance to quality measurement is likely even more pronounced among physicians, since our practices have traditionally encompassed individual decision making without much oversight.

Consequently, our group made a conscious effort to eschew poorly designed peer review, which tends to be inaccurate, arbitrary and punitive. Indeed, the program we instituted took a nuanced approach to rating studies according to their level of difficulty and the time required to read them. Crucially, the peer reviews were double-blinded, eliminating bias and concerns about damaging relationships through open disagreements with the interpretations of colleagues.

Nevertheless, when I received my first “miss,” it was both depressing and upsetting. I reopened the case I had missed and asked myself, “Why did this happen?” and “What had I been doing at that moment that caused me to miss the finding?” Was it at the end of a shift? Did I happen to read it during the Super Bowl?

Asking these types of questions, though painful for all of us, lies at the crux of a robust peer-review process. The point is not just to accurately measure quality, but also to identify the source of errors and apply that knowledge to continuously improve radiologist performance.

Indeed, every radiologist in our group now knows, with a high degree of accuracy, his or her individual error rate across the entire enterprise. Our program’s stated purpose—to help radiologists improve their skills, not to punish poor performers—is being effectively carried out.

I relate this story because I fully believe that this can be a harbinger of good things to come for radiology on many levels. Value-based care and risk-based contracts are fast becoming a reality in the United States; they are expected to overtake fee-for-service by the year 2020.1 To thrive in this new world, I believe radiologists will need to put patient care and quality at the center of our focus, and fully embrace and actively pursue robust peer review.

For patients, peer review is the right thing to do

This challenge is easier to embrace when one considers that the glory of measuring quality far outweighs the humiliation. To patients, simply “hiring” a doctor is not unlike hiring a plumber: It doesn’t guarantee good service. The customer may only need a pipe tightened for the price of a single visit, rather than replacement of the entire pipe assembly for $350 plus the price of two visits. In many cases not even the plumber knows which option is best. The only way to find out what does or doesn’t work is to accurately measure, through a peer-review program, the diagnosis of problems and the results of treatment.2,3

The science of radiology saves lives, but much of radiology has been performed like an art. High rates of misdiagnosis in the United States suggest, at least in part, that our scientific method is mere rhetoric. The many excesses, ambiguities and imprecisions afforded by the fee-for-service model have reduced us to “trade talk” when we should be engaged in fierce dialogue about how to perform with precision and how to continuously improve the quality of our work. Deliberations about market share, reimbursements and RVUs should be more evenly balanced by discussions of how to become the best at what we do.

If we don’t embrace quality measurement, radiology will continue to diminish in effectiveness, influence and profitability. Already our authority is being undermined by decision makers informed by balance sheets: About 70% of commercially insured patients are now being managed by radiology benefits managers.4 Wouldn’t it be better to have decision making driven by quality measures?

We need to stop handicapping ourselves and start learning from our mistakes. Experiencing an error, and having it brought to light, makes repeating that error less likely in the future. A good peer-review program is like residency training: The “resident” radiologist looks at an exam, presents his or her findings to an “attending,” and learns from whatever mistakes he or she might have made. Indeed, robust QA programs allow radiologists to embark on lifelong learning for the betterment of the profession and for optimum patient outcomes.

To return to the QA program at our practice, some individuals in our group had higher rates of interpretive errors in certain subspecialty areas. We were able to change the workflow profiles of those individuals so they no longer received studies in those subspecialties. We then provided a selected set of images for those radiologists to practice on and reviewed their results. They were able to resume interpreting studies in those subspecialties after they had demonstrated improved proficiency. This non-punitive process helps patients by preventing their images from being interpreted by a radiologist who is more likely to make mistakes. It also benefits radiologists by encouraging and providing continuing education. And, finally, it supports the entire health system by reducing misdiagnosis and its potential fallout, including higher downstream costs.

Time for radiology’s “elders” to step up and lead

Before long, public and private third-party payers will require radiologists to publish performance data; everyone will know how good we are. Now is the time to prepare by developing robust peer-review programs that fairly and accurately measure radiologist performance.

Older radiologists (and I include myself in this group!) are in an ideal position to lead recent residency graduates in embracing increased oversight and quality assurance. We have a long historical view of the profession and have witnessed radiology’s rise to prominence in medicine, its increased specialization, and the advent of digital imaging and teleradiology. We have also experienced radiology’s gradual descent due to declining reimbursements, commoditization and overutilization.

Rather than dig in our heels and wish for the “good old days” or simply retire, radiology’s “elders” would be wise to lead and guide younger radiologists. We could start by applying our historical and scientific knowledge to develop robust peer-review programs—programs that will help radiologists accurately measure, continuously improve, and clearly demonstrate the value of their performance. This would propel the profession toward greater patient centeredness.

In the coming world of value-based health care, quality measures must and will reign. Whether and how we influence their trajectory is in our hands.

References

  1. The state of value-based reimbursement and the transition from volume to value in 2014. McKesson Health Solutions; 2014. [http://mhsdialogue.com/mckesson-research-reveals-the-state-of-healthcares-transformation-from-volume-to-value/#.VAnMWfldWCk]. Accessed Sept. 5, 2014.
  2. Abujudeh HH, Boland GW, Kaewlai R, et al. Abdominal and pelvic computed tomography (CT) interpretation: Discrepancy rates among experienced radiologists. Eur Radiol. 2010;20:1952-1957. [http://www.ncbi.nlm.nih.gov/pubmed/20336300]. Accessed Sept. 5, 2014.
  3. Siegle RL, Baram EM, Stewart RR, et al. Rates of disagreement in imaging interpretation in a group of community hospitals. Acad Radiol. 1998;5:148-154. [http://www.ncbi.nlm.nih.gov/pubmed/9522880]. Accessed Sept. 5, 2014.
  4. Abella HA. Radiology benefit management credited for slowing imaging growth. Diagnostic Imaging. November 12, 2009. [http://www.diagnosticimaging.com/practice-management/radiology-benefit-management-credited-slowing-imaging-growth]. Accessed Sept. 5, 2014.