Going with the (Work) Flow in Radiology

By Balcombe J, Tanenbaum LN

Early expectations that AI-empowered computers would one day render radiologists obsolete have proved premature. Instead, many of the myriad AI-powered applications on the imaging market today target just one or two stages of the radiology workflow, albeit with impressive results.

The Radiology Workflow

Before discussing them, it is worth briefly reviewing the workflow followed by virtually every radiology enterprise, private and hospital-based, around the world. It begins when a referring health care professional requests a study based on their clinical interaction with a patient. A payer approves the study, which is then protocolled by a radiologist, performed by a technologist, and returned to the radiologist for interpretation. The report is conveyed to the referring health care professional. Ideally billed accurately and completely, the study may or may not be audited by the payer for quality and compliance.

Each step in this straightforward journey is vulnerable to error or suboptimal decision making or execution. Current AI-driven offerings typically impact one of these steps; the underlying AI algorithms that power them are based mostly on computer vision (CV) analysis or natural language processing (NLP).

Hospital-based sub-specialists such as neurosurgeons or urologists are familiar enough with the range of studies most commonly utilized to answer their clinical questions. In contrast, family physicians and other generalists may not have the radiologic knowledge required to select the optimal modality to evaluate a particular clinical syndrome.

An AI technology that can flag apparent discrepancies between the clinical question and the requested study can improve diagnostic yield, thereby improving patient care and reducing the expense associated with incorrect utilization. A product performing this task would require NLP to evaluate not only the study request but also the patient history and clinical findings, and to determine whether the requested study and/or protocol are appropriate to the case.
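The logic of such a check can be illustrated with a minimal sketch. The rules table, keyword matching, and study names below are hypothetical stand-ins for a trained NLP model and an evidence-based appropriateness source such as the ACR Appropriateness Criteria; a production system would be far more nuanced.

```python
# Minimal sketch of an order-appropriateness check. The mapping and keyword
# matching are hypothetical stand-ins for a trained NLP model and an
# evidence-based appropriateness source.
from dataclasses import dataclass

# Hypothetical mapping from clinical-indication keywords to preferred studies.
PREFERRED_STUDY = {
    "pulmonary embolism": "CT angiography chest",
    "acute stroke": "CT head without contrast",
    "suspected appendicitis": "CT abdomen/pelvis with contrast",
}

@dataclass
class Order:
    requested_study: str
    clinical_history: str

def flag_discrepancy(order: Order) -> str | None:
    """Return a warning if the history suggests a different study."""
    history = order.clinical_history.lower()
    for indication, preferred in PREFERRED_STUDY.items():
        if indication in history and preferred.lower() != order.requested_study.lower():
            return (f"History suggests '{indication}'; consider "
                    f"'{preferred}' instead of '{order.requested_study}'.")
    return None  # no apparent mismatch

order = Order("MRI lumbar spine", "Pleuritic chest pain, rule out pulmonary embolism")
print(flag_discrepancy(order))
```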

Artificial intelligence has a budding role within our imaging devices. Correct patient positioning at the center of the CT bore, a task that busy technologists sometimes fail to perfect, ensures optimal radiation dose and image quality. One study comparing manual positioning with AI-assisted automatic patient positioning using a GE Revolution Maxima showed a 16% reduction in dose, along with shorter duration and improved image signal-to-noise ratio for the AI-assisted method.1

AI-driven tools can also automatically set the range for an imaging exam to prevent over- and under-scanning. Patient motion or misalignment of the field of view resulting in exclusion of part of a key organ can render a CT or X-ray series non-diagnostic. A harried technologist might miss this problem, but an algorithm can monitor and prompt consideration of a repeat acquisition.
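As a rough illustration of such a coverage check, the sketch below assumes an upstream segmentation model has already produced a binary mask of the target organ; a mask that reaches the first or last slice of the volume suggests the organ may be cut off. The function names and toy data are illustrative only.

```python
# Minimal sketch of an automated coverage check, assuming an upstream model
# has already segmented the target organ. A mask touching the superior or
# inferior edge of the volume suggests possible truncation.
import numpy as np

def organ_possibly_truncated(mask: np.ndarray) -> bool:
    """mask: binary 3D array (slices, rows, cols) for the target organ."""
    first_slice, last_slice = mask[0], mask[-1]
    return bool(first_slice.any() or last_slice.any())

# Toy volume: organ voxels present on the very first slice -> flagged.
volume_mask = np.zeros((40, 64, 64), dtype=bool)
volume_mask[0:5, 20:40, 20:40] = True
if organ_possibly_truncated(volume_mask):
    print("Warning: organ reaches scan boundary; consider repeat acquisition.")
```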

To cite another example, metal-containing implants cause artifacts on CT and MR images. Rather than relying solely on completeness of the patient history or alertness of the technologist, scanners can employ AI tools to detect implanted metal on scout/localizer images and then automatically implement metal-artifact reduction sequences for MRI and reconstruction algorithms for CT.
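The decision logic can be sketched simply, though commercial implementations rely on trained detectors rather than the naive intensity threshold used here; the threshold, minimum pixel count, and pixel values below are hypothetical and not calibrated to any scanner.

```python
# Minimal sketch of scout-based metal detection driving reconstruction choice.
# A simple intensity threshold stands in for a trained detector.
import numpy as np

METAL_THRESHOLD = 3000   # hypothetical raw-intensity cutoff for metal
MIN_METAL_PIXELS = 25    # ignore isolated noisy pixels

def metal_detected(scout: np.ndarray) -> bool:
    return int((scout > METAL_THRESHOLD).sum()) >= MIN_METAL_PIXELS

def choose_recon(scout: np.ndarray) -> str:
    # Switch to a metal-artifact-reduction reconstruction when metal is seen.
    return "MAR reconstruction" if metal_detected(scout) else "standard reconstruction"

scout = np.random.default_rng(0).integers(0, 1500, size=(512, 512))
scout[200:210, 250:260] = 4000  # simulated hip prosthesis
print(choose_recon(scout))      # -> "MAR reconstruction"
```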

Upon completion of a study, the images are sent to a PACS for interpretation. Studies are usually prioritized according to origin or urgency; for example, studies ordered by the emergency department are marked for immediate reading. Inevitably, however, some studies with acute findings end up languishing on a worklist. Several computer vision products on the market are designed to evaluate for hemorrhage, stroke, aortic dissection, pulmonary embolism, and other clinically urgent findings and prioritize positive studies for prompt interpretation.
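Conceptually, this triage amounts to re-ranking a priority queue whenever a detection model crosses its operating point. The sketch below is a minimal illustration; the scores, threshold, and accession numbers are invented, and real products integrate with the PACS worklist rather than an in-memory heap.

```python
# Minimal sketch of AI-driven worklist triage. Scores and thresholds are
# hypothetical; real products push priority flags to the PACS worklist.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class WorklistItem:
    priority: int                          # lower value = read sooner
    accession: str = field(compare=False)  # excluded from ordering

def triage(accession: str, base_priority: int, ai_urgent_score: float) -> WorklistItem:
    """Promote a study to the top of the list when the AI flags an urgent finding."""
    URGENT_THRESHOLD = 0.9  # hypothetical operating point
    priority = 0 if ai_urgent_score >= URGENT_THRESHOLD else base_priority
    return WorklistItem(priority, accession)

worklist: list[WorklistItem] = []
heapq.heappush(worklist, triage("ACC1001", base_priority=5, ai_urgent_score=0.2))
heapq.heappush(worklist, triage("ACC1002", base_priority=5, ai_urgent_score=0.97))
print(heapq.heappop(worklist).accession)  # -> ACC1002, read first
```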

Additionally, in the setting of acute stroke, notifications to the referring team can be automated, minimizing the time to intervention. A clinical trial of one ischemic stroke-detection product demonstrated a 22-minute time savings in transferring patients from a primary stroke center to a comprehensive stroke center, and an 89-minute time savings from patient arrival at the primary center to the start of the interventional procedure.2

Many other commercially available image-analysis algorithms, including those from DeepHealth, Curemetrix, Kheiron, AIDOC, Gleamer, and Qure.AI, can evaluate for specific lesions such as pulmonary nodules, pneumonia, breast cancer, and fracture. Some incorporate the ability to evaluate prior studies (eg, for lung nodule growth) or attempt to characterize the aggressiveness of a lesion (lung nodule malignancy score).

However, image analysis algorithms (eg, for pulmonary nodule detection) present challenges. The combination of true and false positive flags can lead to “alert fatigue.” For the radiologist, who will detect the majority of these findings without AI assistance, most alerts prove either unnecessary or false. As a result, few radiologists put these products to use.

By leveraging a combination of computer vision image analysis and NLP of the radiologist’s report, one dual AI algorithm (DualiQ, Imedis AI) attempts to call attention only to actionable, unreported findings like pulmonary nodules, dilated aortas, and liver and pancreatic lesions, and to prevent alerts for findings already noted. This tool has been shown to lead to a 13% increase in detection of actionable findings with minimal unnecessary alerts.3 Alternatively, the application can curate selected, high-yield exams for retrospective quality assurance (QA), outperforming the low rate of discrepancies found in typical, randomized QA processes.4
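While the vendor's actual method is proprietary, the core idea of alerting only on unreported findings can be illustrated as a set difference between what the image model detects and what the report mentions. The detector output, synonym lists, and phrase matching below are hypothetical simplifications of the CV and NLP stages.

```python
# Minimal sketch of the dual-check idea: alert only on findings the image
# model detects that the report does not mention.
def unreported_findings(cv_findings: set[str], report_text: str) -> set[str]:
    report = report_text.lower()
    # Hypothetical synonym lists standing in for real NLP concept extraction.
    synonyms = {
        "pulmonary nodule": ["nodule", "nodular opacity"],
        "dilated aorta": ["aortic dilatation", "aneurysm", "dilated aorta"],
    }
    reported = {
        finding for finding in cv_findings
        if any(term in report for term in synonyms.get(finding, [finding]))
    }
    return cv_findings - reported  # only these generate alerts

report = "Findings: 8 mm nodular opacity in the right upper lobe. Heart size normal."
print(unreported_findings({"pulmonary nodule", "dilated aorta"}, report))
# -> {'dilated aorta'}  (the nodule was reported, so no alert)
```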

Natural language processing may garner less public attention than computer vision analysis, but a plethora of NLP products is becoming available to help maximize the effectiveness of radiology reports. These products can:

  • Detect laterality concordance errors between the report body and impression (a minimal version of this check is sketched after this list);
  • Automatically generate the report impression;
  • Detect reported actionable findings and generate evidence-based recommendations (eg, a follow-up CT study for a 7-mm pulmonary nodule);
  • Convey follow-up recommendations to clinicians and/or verify scheduling of follow-up scans;
  • Detect billing errors, including discrepancies between the study performed and the reported study description, the absence of clinical history, and the like; and
  • Check report compliance with Merit-based Incentive Payment System measures.
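To make the first capability concrete, the sketch below checks laterality concordance with a simple regular expression standing in for a real NLP model; the example report text is invented.

```python
# Minimal sketch of a laterality concordance check between the report body
# and the impression; a regex stands in for a real NLP model.
import re

def laterality(text: str) -> set[str]:
    return set(re.findall(r"\b(left|right)\b", text.lower()))

def laterality_mismatch(body: str, impression: str) -> bool:
    """Flag sides mentioned in the impression that never appear in the body."""
    return bool(laterality(impression) - laterality(body))

body = "There is a 3 cm mass in the LEFT kidney."
impression = "Right renal mass; recommend urology referral."
if laterality_mismatch(body, impression):
    print("Possible laterality error: impression mentions a side absent from the body.")
```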

Grand visions of AI superseding the role of radiologists have faded. Instead, a collection of AI products is developing to aid the efficiency, accuracy, and timeliness of the radiology workflow. Many boast impressive reductions in time to treatment, radiation dose, and missed actionable findings, as well as improved implementation of radiologist recommendations. The fragmented nature of the market makes deploying AI solutions across the enterprise difficult for radiology departments. However, the AI market for radiology remains in its infancy, and in its maturation lies the promise of a future in which a single integration can enable application of multiple solutions.

References

  1. Gang Y, Chen X, Li H, et al. A comparison between manual and artificial intelligence-based automatic positioning in CT imaging for COVID-19 patients. Eur Radiol. 2021;31:6049-6058. doi:10.1007/s00330-020-07629-4
  2. Hassan AE, Ringheanu VM, Rabah RR, Preston L, Tekle WG, Qureshi AI. Early experience utilizing artificial intelligence shows significant reduction in transfer times and length of stay in a hub and spoke model. Interv Neuroradiol. 2020;26(5):615-622. doi:10.1177/1591019920953055
  3. Yen A, Pfeffer Y, Blumenfeld A, Balcombe JN, Berland LL, Tanenbaum L, Kligerman SJ. Use of a dual artificial intelligence platform to detect unreported lung nodules. J Comput Assist Tomogr. 2021;45(2):318-322. doi:10.1097/RCT.0000000000001118
  4. Itri JN, Donithan A, Patel SH. Random versus nonrandom peer review: a case for more meaningful peer review. J Am Coll Radiol. 2018;15(7):1045-1052. doi:10.1016/j.jacr.2018.03.054

Balcombe J, Tanenbaum LN. (Mar 04, 2022). Going with the (Work) Flow in Radiology. Appl Radiol. 2022;51(2):24-26.

Affiliations: Dr. Balcombe is section head of CT and patient safety at Assuta Medical Centres, Israel, and chief medical officer of Imedis AI. Dr. Tanenbaum is vice president and chief technology officer at RadNet, Inc., New York, New York. He is a member of the Editorial Advisory Board of Applied Radiology.
