Radiology Reporting, Changes Worth Making Are Never Easy

By Gary H. Danton, MD, PhD

Dr. Danton is Chief of Imaging Informatics, Department of Radiology, University of Miami Miller School of Medicine, Miami, FL

Reflecting on how radiology has changed since the late 1800s is a daunting task because so many aspects are different. We can reflect on the improvement of film radiographs, digitalization and the advent of computed tomography (CT), magnetic resonance imaging (MRI), molecular imaging and picture archiving and communication systems (PACS). The list goes on and on. Instead, it would be far easier to list what has not changed, and surprisingly, leading that list is the structure of our radiology reports.1

Granted, we’re no longer scribbling our notes on paper cards by hand or sketching the images into a report like an exotic charcoal artist (for more on that comparison, see Dr. Weiss’ excellent editorial on the “art of radiology reporting” on page 6 of this issue). Yet the general format of our prose reports is unchanged. We comment on the technique, describe our findings as we would speak them, list some limitations and give a summary impression. The structure and format vary from individual to individual, group to group and study to study. But, if it works, why must everything change?

It is easy to pick up a dictation phone and create a report by free association, and in a matter of seconds, the report has ended and passed to the black box of transcription. For better or worse, the demands of our patients, referring physicians, hospital administrators and payers are forcing radiologists to update our practices. As with most changes, these can be difficult, if not painful, to implement, and we must accept that by adapting, our field will be stronger as we provide better, more consistent patient care and a higher quality product to our referrers.

This article will review arguments for changing the means by which we generate a report and describe a variety of structural and content changes.

Why change?

If we accept the premise that changing dictation methodology and style is not for the convenience of radiologists, the question arises, “Why change?” The impetus to change comes from referring physicians, our business managers and hospitals, and to a lesser extent, our patients. Long turnaround times (TATs) are a source of disappointment and frustration for all our customers. It is becoming common to expect at most a 24-hour TAT for outpatients, a 4-hour TAT for inpatients and a 1-hour TAT for emergencies.2

These TATs can be difficult to achieve with traditional transcriptionists and radiologist final report editing. There are also growing expectations for particular styles of radiology reporting. In 1988, surveys of referring physicians found that while most clinicians were happy with most reports, rating them an 8 out of 10, 49% complained that reports sometimes did not address the clinical question and 40% thought that reports were sometimes confusing.3 Prose reporting vs. itemized reporting has also been studied using physician-preference surveys. A prose report describes the findings using standard paragraphs as if the radiologist is speaking to the referring physician. An itemized report separates the findings into shorter phrases and lists. Naik et al. reported a study of this kind that evaluated ultrasound reports in 2001. Eighty-six percent of clinicians and 64% of radiologists preferred itemized reports to prose reports. Only 2% of radiologists preferred prose, with the remainder having no preference. When stratified for level of detail, between 76% and 86% of physicians preferred a detailed, itemized report.4 Hospital administrators and referring physicians are empowered to be more demanding since it is no longer necessary to have an in-house radiologist for diagnostic reporting. If a competitor teleradiology group contracts to produce reports more quickly and with a standardized structure, hospitals and referrers may be willing to change radiologists.

Patients have become more involved with managing their care due to the availability of medical information on the internet and digital medical records. Patients are now viewing their own studies on CDs at home. They are gradually becoming the repository of their medical record as they switch providers and hospitals with some frequency. Electronic medical record systems, such as those provided by Epic (Verona, WI), have implemented personal health records that allow patients to view their medical records electronically from the host institution. LifeIMAGE (Newton, MA) software and hardware permit patients to view and control access to their radiology images. They can collect their images from a group of sources, upload them and share them with providers. Google Health (Mountain View, CA) and Microsoft HealthVault (Redmond, WA) also provide solutions for patient control of their medical record independent of specific institutions. Radiologists will have to be aware that patients will be reading and trying to understand our reports.

Pay-for-performance initiatives also favor changes in our reporting systems. The American College of Radiology (ACR) and the Centers for Medicare & Medicaid Services (CMS) have both expressed their support for tying reimbursement to the quality of medical reporting. The 2006 Tax Relief and Health Care Act (TRHCA) required that a physician quality reporting system be established. In response, CMS began the Physician Quality Reporting Initiative (PQRI). It is currently a voluntary program where additional reimbursement (equal to 2% of total estimated Medicare Part B Physician Fee Schedule) is offered to eligible providers who follow reporting performance measures. There are 5 performance measures applicable to diagnostic radiology and about 8 measures that may apply to interventional radiology and radiation oncology.

Diagnostic radiology performance measures include TAT and quantitative reporting, and they specify findings that must appear in certain reports. Measure #10 requires documentation of the absence or presence of hemorrhage, mass lesion and acute infarction in either CT or MRI for patients with symptoms or diagnosis of stroke. The in-hospital completed report must be available within 24 hours of arrival. Measure #145 requires that exposure time be reported for fluoroscopy procedures. Measure #146 examines the percentage of final reports for screening mammograms classified as BI-RADS 3 (probably benign), with the idea that this category is used too often. Measure #147 requires that patients undergoing bone scintigraphy have appropriate documentation, correlating scintigraphic results with other relevant imaging studies such as CT, X-ray or MRI. Measure #195 replaced #11 and requires all patients with carotid imaging studies to have reference measurements of the distal internal carotid diameter to compare with areas of stenosis.
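A completeness check of the kind measure #10 implies can be automated. The sketch below is purely illustrative, assuming a simple keyword screen (the regular expressions and the sample report are invented for the example, not drawn from any PQRI specification):

```python
import re

# Hypothetical sketch: flag stroke-protocol CT/MRI reports that never mention
# one of the three findings measure #10 requires be documented (as present
# OR absent): hemorrhage, mass lesion, acute infarction.
REQUIRED_FINDINGS = {
    "hemorrhage": r"\bhemorrhage\b",
    "mass lesion": r"\bmass( lesion)?\b",
    "acute infarction": r"\b(acute )?infarct(ion)?\b",
}

def missing_findings(report_text: str) -> list[str]:
    """Return the required findings the report never mentions."""
    text = report_text.lower()
    return [name for name, pattern in REQUIRED_FINDINGS.items()
            if not re.search(pattern, text)]

report = ("FINDINGS: No intracranial hemorrhage. "
          "No mass lesion. No acute infarction.")
print(missing_findings(report))  # -> []
```

A real implementation would need far more robust language handling, but even a screen this crude shows how a reporting system, rather than the radiologist's memory, could enforce measure requirements.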

While these initiatives are currently voluntary and have not been broadly implemented, it is not difficult to imagine them as requirements for reimbursement in the future. At the very least, the number of performance measures will grow over time. Radiologists may have difficulty remembering what must be included in each exam to satisfy the requirements. Changing our reporting systems will help us achieve these goals.

Radiology reports contain a wealth of data that could be used for research purposes if it were easily extractable. Unfortunately, the traditional prose report is difficult to electronically data-mine because of differences in the way radiologists report results, construct phrases and vary terminology. Sobel et al. studied chest radiographic reports of 822 patients in 297 hospitals. They documented numerous synonyms for a variety of diagnoses, including 24 terms for pulmonary vascular congestion and 30 phrases for reporting that an abnormality may or may not be present.5 Large numbers of synonyms and styles confuse referring physicians in addition to limiting data-mining abilities. A recent study of CT angiography reports for pulmonary embolus found that the way statements of limitations were phrased often confused the diagnostic conclusion. The authors recommended setting reporting standards for these reports.6
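The synonym problem can be made concrete with a small sketch. The table below is invented for illustration (it is not Sobel et al.'s actual term list), but it shows the kind of normalization step that makes prose reports minable:

```python
# Illustrative sketch only: the synonym table is hypothetical, mapping
# free-text variants onto one canonical term before data mining.
SYNONYMS = {
    "pulmonary vascular congestion": [
        "vascular congestion", "pulmonary venous hypertension",
        "cephalization of pulmonary vessels", "vascular redistribution",
    ],
}

def normalize(phrase: str) -> str:
    """Map a reported phrase to its canonical term, if one is known."""
    phrase = phrase.strip().lower()
    for canonical, variants in SYNONYMS.items():
        if phrase == canonical or phrase in variants:
            return canonical
    return phrase  # unknown phrases pass through unchanged

print(normalize("Vascular redistribution"))
# -> pulmonary vascular congestion
```

With 24 documented terms for a single diagnosis, every miner of free-text reports must build and maintain a table like this; a standard lexicon used at dictation time would make the step unnecessary.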

The preceding paragraphs argued that radiology reporting systems will need to change from the transcribed prose report to something different that allows faster TAT, a more unified structure, and a standardized vocabulary with greater appeal to referring physicians and patients. The following sections will address informatics solutions for these issues, including voice recognition, structured reporting, lexicons, and quantitative and patient-centered reporting.

Voice recognition

“Wow, this is the greatest software I have ever used and I wish we implemented this sooner.” This is a phrase that has probably never been uttered during the first week of voice recognition (VR) software implementation, except perhaps by the most technology-savvy radiologists. The more commonly heard phrases would be inappropriate to publish in a reputable journal. With time, however, VR systems are received with mixed reviews among radiologists but applause from hospital and practice managers, who view the shift of report creation from transcription to the radiologist as a cost savings with improved TAT. While there is general agreement that VR reduces TAT,2,7,8 few publications report actual numbers. TAT reductions are reported as high as 90%.8 Ramaswamy et al. analyzed 4552 reports prior to and 5072 reports after implementation of VR. In their study, mean MRI report TAT decreased by about 50%. Report length decreased, spelling errors decreased, but spacing errors increased.9 McGurk et al. also reported higher error rates in VR-generated reports.10 Some authors speculate that if productivity suffers because radiologists spend more time creating and editing reports, financial gains may be lost. A mathematical model was proposed to help practices analyze costs and benefits.11 An intermediate solution is VR with transcriptionist editing. A transcriptionist listens to the report and edits the product of the VR system. The edited version is then queued for signing. TATs and transcription fees are less than full transcription, and many radiologists feel more comfortable with the system, at least as an intermediate step to full radiologist editing.

The successful implementation of VR systems relies on managing radiologist expectations, extensive use and development of templates and macros, as well as training. Groups should plan for at least a short-term decrease in productivity (or longer hours) as radiologists adapt to the new software. VR software consists of 2 basic parts, the speech-to-text engine and a workflow manager. A great source of frustration for radiologists new to VR is the speech-to-text portion—a basic understanding of VR is helpful to fully optimize the software. Speech recognition engines work via 2 algorithmic models. The acoustic-phonetic model divides language into groups of sounds (phonemes). Combinations of sounds are associated with a particular word. The language model12 further narrows the correct word by accounting for the entire phrase, or the words preceding the phoneme in question. A classic example is the sentence, “There is a mass in the right upper ____.” The blank could be extremity, quadrant, or lobe but is probably not brain. Understanding this principle explains why it is helpful to correct the VR software using phrases and not individual words. Continually repeating the same single word is less likely to yield a useful result than repeating a short phrase including the difficult word.
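The language-model idea behind the “right upper ____” example can be sketched in a few lines. The probabilities below are invented for illustration and do not come from any real VR engine; the point is only that context reweights candidates the acoustic model finds equally plausible:

```python
# Toy illustration of a language model: given the preceding words,
# candidate words are reweighted by (invented) context probabilities.
CONTEXT_PROBS = {
    ("right", "upper"): {"lobe": 0.55, "quadrant": 0.30,
                         "extremity": 0.14, "brain": 0.01},
}

def best_word(context: tuple[str, str],
              acoustic_scores: dict[str, float]) -> str:
    """Combine acoustic scores with context probabilities; pick the max."""
    lm = CONTEXT_PROBS.get(context, {})
    return max(acoustic_scores,
               key=lambda w: acoustic_scores[w] * lm.get(w, 1e-6))

# Suppose the acoustic model finds "brain" and "lobe" equally plausible:
print(best_word(("right", "upper"), {"brain": 0.5, "lobe": 0.5}))  # -> lobe
```

This also explains the correction advice above: retraining on a full phrase updates the contextual weights, whereas repeating a single word in isolation gives the language model nothing to work with.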

Users must understand that the VR system can adapt to them over time if the correction features are repeatedly used, but the system is unlikely to be perfect. Occasionally, users will need to adapt their dictating styles to suit the software. Environmental factors may also impact VR performance. Reading rooms should be quiet or have white noise from either a fan or a white-noise generator that drowns out sudden sounds like footsteps. Carpeting and soundproofing are also beneficial. The use of a microphone headset that keeps the microphone a stable distance from the mouth may also improve user experiences.13

RadLex

Medical terms have many synonyms, and using multiple different terms confuses physicians and patients who read reports. Multiple terminologies limit computer-aided radiology report research strategies, hamper teaching-file organization and make searching publication libraries such as PubMed less efficient. Even radiology billing could be improved by using standard terminologies and phrases identifiable by proofreaders or coding software. Having a radiology lexicon that standardizes vocabulary became a goal of the Radiological Society of North America (RSNA). Evaluation of existing medical lexicons by Langlotz and Caldwell in 2002 concluded that lexicons such as SNOMED and the Unified Medical Language System (UMLS) were insufficient for radiology needs.14 RSNA took up the challenge and began developing RadLex. As RadLex grew, its complexity necessitated its development into an ontology that not only defines terms but classifies relationships between terms. For example, if I were to electronically search text from a set of radiology reports for glioma, I would want the search results to include glioblastoma and oligodendroglioma. This is accomplished by a defined association between a term (glioma) and its subsets (glioblastoma and oligodendroglioma). An ontology framework allows software developers, engineers and users to more easily use RadLex in development projects.15,16 RadLex is provided free by the RSNA, and terms can be searched online at http://radlex.org/viewer.17 Radiologists developing and implementing templates for dictation are encouraged to use RadLex terminologies. Software engineers designing VR and structured reporting systems should look to RadLex to create their basic templates and phrases.
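The glioma example translates directly into code. This is a minimal sketch of ontology-aware search, not the actual RadLex data model; the subtype table follows the article's example:

```python
# Minimal sketch: a term hierarchy with transitive subtype expansion,
# illustrating why an ontology beats plain keyword search.
SUBTYPES = {
    "glioma": ["glioblastoma", "oligodendroglioma"],
    "glioblastoma": [],
    "oligodendroglioma": [],
}

def expand(term: str) -> set[str]:
    """Return a term plus all of its (transitive) subtypes."""
    terms = {term}
    for child in SUBTYPES.get(term, []):
        terms |= expand(child)
    return terms

def search(reports: list[str], term: str) -> list[str]:
    """Find reports mentioning the term or any of its subtypes."""
    wanted = expand(term)
    return [r for r in reports if any(t in r.lower() for t in wanted)]

reports = ["Findings consistent with glioblastoma.",
           "Normal study.",
           "Oligodendroglioma, stable."]
print(search(reports, "glioma"))
```

A plain text search for “glioma” would miss the oligodendroglioma report entirely; the defined parent-child association is what recovers it.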

RadLex goes beyond defining and associating pathological and anatomical terms. It defines relationship words and phrases such as “part of,” “contained in” or “mimics another entity.” Image-quality terms such as diagnostic quality, exemplary quality and nondiagnostic quality can be found. The RadLex playbook was developed by committees specifically devoted to imaging technique, including devices, exams and procedure steps. Each term is classified and organized regardless of how simple or complex it intuitively seems. This complex radiology lexicon will likely form the basis of structured reporting in years to come.

Structured reporting

Structured reporting has been characterized as having 3 features: headings, such as history and findings, must be consistent; the report is itemized, with short, descriptive terms; and the report uses a standard lexicon with codified terms.13 VR and RadLex will enable development of efficient structured reporting systems. Many current structured reporting systems use a click-based approach where reports are constructed by pointing the mouse at a series of options. These systems are most commonly used for mammography, but are available for the entire body. Unfortunately, the large number of clicks needed to generate a report can be discouraging and requires the radiologist to spend time looking at the reporting screen and away from the images. An ideal structured reporting system allows radiologists to keep their eyes on the image as much as possible, with little glancing away at the dictation. Once the dictation technique is learned, structured reports can be standardized throughout a practice or institution and can be designed to include information required to maximize billing, decrease TAT and improve readability. With codified terms that can be catalogued, research institutions will be able to efficiently extract data from accumulated patient reports.

Templates and macros can be combined in one report so both normal and abnormal findings are included. Having only normal templates available is not an efficient use of VR technology. As a starting point, RSNA compiled a list of 68 report templates available on their website for free, and other templates are being planned.18 Besides structural consistency, templates may include pertinent reference material. In a survey of general practitioners in Britain, respondents complained that they were unfamiliar with the normal sizes of organs listed in radiology ultrasound reports and the significance of the listed measurements. In that report, 62% of respondents preferred not having measurements of normal organs listed at all.19 A structured reporting system could automatically populate the normal ranges and even have a predefined description of their significance, without the radiologist having to remember numbers or take the time to dictate these features. Common references that may help clinicians take an evidence-based approach to the radiologist’s recommendations can be included, such as Fleischner criteria for pulmonary nodules20 or data regarding adrenal incidentalomas.21
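Auto-populating normal ranges is a simple templating problem. In the sketch below, the organ names and numeric ranges are illustrative placeholders, not clinical reference values:

```python
# Hedged sketch: the normal ranges here are invented for illustration,
# not clinical reference data. A real system would draw them from a
# curated, institution-approved table.
NORMAL_RANGES_CM = {
    "spleen": (7.0, 12.0),   # craniocaudal length, example figures only
    "kidney": (9.0, 13.0),
}

def describe(organ: str, measured_cm: float) -> str:
    """Fill a report phrase with the measurement and its normal range."""
    lo, hi = NORMAL_RANGES_CM[organ]
    status = "within" if lo <= measured_cm <= hi else "outside"
    return (f"{organ.capitalize()}: {measured_cm:.1f} cm, "
            f"{status} the normal range of {lo:.0f}-{hi:.0f} cm.")

print(describe("spleen", 10.5))
# -> Spleen: 10.5 cm, within the normal range of 7-12 cm.
```

The radiologist dictates only the measurement; the template supplies the reference range and its interpretation, addressing exactly the complaint the British GPs raised.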

Structured reporting is not limited to radiologist data entry. Information commonly found in the radiology report can be automated. Patient demographics and the reasons for the exam can be input by the radiology information system (RIS) since that information is already used for scheduling. Data from comparisons could be derived directly from the PACS. Measurements can populate a predefined report directly from software applications such as PACS or advanced visualization software, such as that offered by TeraRecon (San Mateo, CA). Structured cardiac CT reports automatically populate diagrams of the coronary arteries; similar technologies could be developed for other pathologies. Single images can be exported to the report documents, illustrating findings to the clinicians. Kurdziel and Hopper explored the use of a system that video recorded the radiologist explaining images and pointing out the findings. About 97% of survey respondents reported a preference for this type of multimedia report instead of their current system.22

Patient-centered reporting

As patients take a greater role managing their information and reading radiology reports, these reports may be structured to cater to them. A macro for renal cyst could include a separate paragraph at the end of the report with a description and explanation for the patient. Text in electronic reports could be hyperlinked to institutional or national Web sites to explain the terms to both patients and referring physicians. Practices may find such a system is a valuable marketing tool. One might envision a Web site with a picture of one of the radiologists introducing themselves as the patient’s physician and a video or written description of a common term. For example, radiologists and clinicians may not spend time worrying about subsegmental atelectasis but it is not hard to imagine patients calling their family doctor to explain this bizarre-sounding finding. Developing patient-centered systems could facilitate the ACR’s goal to introduce the radiologist to patients as their physician.
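Hyperlinking report terms to explanatory pages is straightforward to prototype. The glossary URL below is hypothetical (example.org is a placeholder, not a real patient-education site):

```python
import re

# Illustrative sketch: the glossary entries and URL are hypothetical.
# Known terms in an electronic report are wrapped with patient-facing links.
GLOSSARY = {
    "subsegmental atelectasis":
        "https://example.org/glossary/subsegmental-atelectasis",
}

def link_terms(report_html: str) -> str:
    """Wrap known glossary terms in the report with explanatory links."""
    for term, url in GLOSSARY.items():
        report_html = re.sub(
            re.escape(term),
            lambda m: f'<a href="{url}">{m.group(0)}</a>',
            report_html, flags=re.IGNORECASE)
    return report_html

print(link_terms("Mild subsegmental atelectasis at the left base."))
```

The replacement callback preserves the report's original capitalization; only the link markup is added, so the signed report text itself is unchanged.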

Quantitative-reporting and evidence-based medicine

The premise of evidence-based medicine is that clinical decisions and diagnoses be based on concepts tested by the scientific method. Imaging findings analyzed with qualitative descriptions and conclusions can be utilized in evidence-based models. Whether a test is ultimately positive or negative fits easily into the likelihood ratio nomogram. Data on sensitivities, specificities and likelihood ratios could be populated into reports, aiding the clinician’s decision making. Quantitative imaging goes a step further by providing specific evidence-based measurements upon which clinicians can base their decisions. Radiology reports have become more qualitative as quantitative reports seemed anecdotally to offer no additional benefit. But is it enough to say, for example, that lymphoma is progressing? Are 2 orthogonal measurements sufficient to assess the tumor size, or does a more quantitative approach such as volumetric measurement offer some benefit? If so, how do we validate volumetric measurements and ensure consistency between software? Answers to these questions are a focus of current research. If radiologists are going to devote the time to make specific measurements and perform 3-dimensional reconstructions to generate exact volumes, the advantages of this data should be established. Molecular imaging techniques, however, require quantitative assessments because the presence of a molecular marker may be normal but changes in marker expression could herald disease.
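The arithmetic behind the likelihood ratio nomogram is short enough to show directly. The pretest probability and likelihood ratio values below are invented for the worked example, not taken from any study:

```python
# Worked sketch of the likelihood-ratio arithmetic a nomogram performs:
# convert probability to odds, multiply by the LR, convert back.
def post_test_probability(pretest_p: float, lr: float) -> float:
    """Update a pretest probability with a test's likelihood ratio."""
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

# Illustrative numbers: a positive test with LR+ = 9
# applied to a 30% pretest probability.
p = post_test_probability(0.30, 9.0)
print(f"{p:.2f}")  # -> 0.79
```

A report template that carried the study's published likelihood ratios could let the referring clinician perform this update at the point of care rather than consulting a nomogram.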

A number of initiatives explore the advantages of quantitative imaging. The Quantitative Imaging Network was founded in 2008 with the goal of developing quantitative measurements of response to therapy in a series of clinical trials.23 The Toward Quantitative Imaging (TQI) group is focused on advancing and promoting quantitative imaging and the use of imaging biomarkers.24 At the 2009 RSNA Annual Meeting and Scientific Assembly, an exhibit entitled Toward Quantitative Imaging: The Reading Room of the Future highlighted industry and academic partnerships advancing the field. Another group, the Quantitative Imaging Biomarkers Alliance (QIBA), has a goal to improve the value of biomarkers by reducing variability between studies. Volumetric CT, dynamic contrast-enhanced MRI and positron emission tomography with 18F-fluorodeoxyglucose were all recent topics at QIBA meetings.25

Conclusion

Changing the way radiology reports are structured requires acceptance and adaptation among radiologists. Adapting will not be easy but the results will make for faster, more consistent radiology reports that are amenable to research data mining. A number of organizations sponsor programs to foster structured reporting and our industry partners will continue to work toward developing systems to simplify amassing and reporting these data.

Further information

Additional details about the ACR and CMS initiatives to link quality measures to radiology reporting can be found at http://www.acr.org/SecondaryMainMenuCategories/quality_safety/p4p.aspx and at http://www.cms.hhs.gov/PQRI/15_MeasuresCodes.asp.

REFERENCES

  1. Knight N, Reiner BI. Reinventing the radiology report: Part 1, a history. Imaging Economics. 2004. http://www.imagingeconomics.com/issue/articles/2004-11_10.asp. Accessed online: April 6, 2010.
  2. Boland GW, Guimaraes AS, Mueller PR. Radiology report turnaround: expectations and solutions. Eur Radiol. 2008;18:1326–8.
  3. Clinger NJ, Hunter TB, Hillman BJ. Radiology reporting: Attitudes of referring physicians. Radiology. 1988;169:825–826.
  4. Naik SS, Hanbidge A, Wilson SR. Radiology reports: Examining radiologist and clinician preferences regarding style and content. AJR Am J Roentgenol. 2001;176:591–598.
  5. Sobel JL, Pearson ML, Gross K, et al. Information content and clarity of radiologists’ reports for chest radiography. Acad Radiol. 1996;3:709–717.
  6. Abujudeh HH, Kaewlai R, Farsad K, et al. Computed tomography pulmonary angiography: An assessment of the radiology report. Acad Radiol. 2009;16:1309–1315.
  7. Reiner BI, Siegel E. Reinventing the radiology report: Part 2, time to adapt. Imaging Economics. 2004. http://www.imagingeconomics.com/issues/articles/2004-12_04.asp. Accessed online: April 6, 2010.
  8. Dreyer KJ. PACS: A guide to the digital revolution. 2nd ed. New York: Springer; 2006.
  9. Ramaswamy MR, Chaljub G, Esch O, et al. Continuous speech recognition in MR imaging reporting: Advantages, disadvantages, and impact. AJR Am J Roentgenol. 2000;174:617–622.
  10. McGurk S, Brauer K, Macfarlane TV, Duncan KA. The effect of voice recognition software on comparative error rates in radiology reports. Br J Radiol. 2008;81:767–770.
  11. Reinus WR. Economics of radiology report editing using voice recognition technology. J Am Coll Radiol. 2007;4:890–894.
  12. Eng J, Eisner JM. Informatics in Radiology (infoRAD): Radiology report entry with automatic phrase completion driven by language modeling. Radiographics. 2004;24:1493–1501.
  13. Society for Imaging Informatics in Medicine. Practical imaging informatics: Foundations and applications for PACS professionals. 2009. New York, NY; Springer.
  14. Langlotz CP, Caldwell SA. The completeness of existing lexicons for representing radiology report information. J Digit Imaging. 2002;15 Suppl 1:201–205.
  15. Rubin DL. Creating and curating a terminology for radiology: Ontology modeling and analysis. J Digit Imaging. 2008;21:355–362.
  16. Rubin DL, Noy NF, Musen MA. Protege: A tool for managing and using terminology in radiology applications. J Digit Imaging. 2007;20 Suppl 1:34–46.
  17. RSNA Informatics RadLex. Radiological Society of North America. 2009. http://radlex.org/viewer. Accessed online: April 6, 2010.
  18. RSNA Informatics Reporting. Radiological Society of North America. 2010. http://www.rsna.org/Informatics/radreports.cfm. Accessed online: April 6, 2010.
  19. Grieve FM, Plumb AA, Khan SH. Radiology reporting: A general practitioner’s perspective. Br J Radiol. 2010;83:17–22.
  20. MacMahon H, Austin JH, Gamsu G, et al. Guidelines for management of small pulmonary nodules detected on CT scans: A statement from the Fleischner Society. Radiology. 2005;237:395–400.
  21. NIH state-of-the-science statement on management of the clinically inapparent adrenal mass (“incidentaloma”). NIH Consens State Sci Statements. 2002;19:1–25.
  22. Kurdziel KA, Hopper KD, Zaidel M, Zukoski MJ. “Robo-Rad:” An inexpensive user-friendly multimedia report system for radiology. Telemed J. 1996;2:123–129.
  23. Clarke LP, Croft BS, Nordstrom R, et al. Quantitative imaging for evaluation of response to cancer therapy. Transl Oncol. 2009;2:195–197.
  24. Toward Quantitative Imaging (TQI). Radiological Society of North America. 2010. http://www.rsna.org/Research/TQI/index.cfm. Accessed online: April 6, 2010.
  25. Quantitative Imaging Biomarkers Alliance. Radiological Society of North America. 2010. http://www.rsna.org/Research/qiba_intro.cfm. Accessed online: April 6, 2010.

Radiology Reporting, Changes Worth Making Are Never Easy. Appl Radiol.

May 05, 2010


