In AI, Experience Matters: A Guide to Navigating the Regulatory and Governance Environment

By Giannikopoulos D

As artificial intelligence (AI) becomes more prevalent in health care, discussions around regulation and governance are on the rise. How can patient data be secured and protected? How should responsible AI deployment and monitoring occur? How much skepticism should health systems bring to technology that can serve as a diagnostic aid and potentially influence treatment decisions? Health system leaders are navigating this evolving frontier along with AI regulations, which seem to be outpaced only by the innovation of the technology itself.

In the space between codified regulations and guidelines, many leaders are grappling with how to have effective conversations around the latest regulatory requirements while establishing governance structures. Although no governing body or group currently has complete authority over AI governance, regulatory measures such as the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence1 and the Office of the National Coordinator for Health Information Technology's Health Data, Technology, and Interoperability (HTI-1) Final Rule2 establish rules for AI use in health care.

Guidelines such as the American Medical Association principles,3 the Evaluating Commercial AI Solutions in Radiology (ECLAIR) guidelines,4 and Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA,5 meanwhile, offer recommendations on aspects of AI ranging from accountability and transparency to vendor and value assessments.

It is no secret that clinical AI is a burgeoning field, one that has witnessed incredible growth over the past few years. As of October 2023, the Food and Drug Administration (FDA) listed 691 approved AI/machine learning-enabled medical devices,6 roughly 15% of which were approved in 2023 alone.

As AI’s role expands throughout health systems, so should the conversations across teams. But what should those discussions entail and who can help provide much-needed insights to navigate this ever-changing landscape?

AI partners can play a vital role in this regard. Much as one might entrust a financial advisor to manage their money, an experienced AI partner brings deep familiarity with FDA requirements, in-depth industry expertise, and a wealth of health system experience with AI, and can facilitate collaboration between institutions. This experience in relatively novel regulatory realms can prove invaluable to health systems.

To maximize AI support, an AI partner should be proactively discussing, and have the knowledge or experience to help navigate, the following key areas:

Understanding of the AI partner’s infrastructure in place to ensure regulatory compliance amid an increasingly complex landscape. It stands to reason that many AI products, while offering great potential for specific use cases, may not have the infrastructure required to guide health systems through the complexities of AI regulation. With constant development on the horizon, and even signs of regional AI measures (see Utah’s recent enactment of AI-focused consumer protection legislation),7 partners with dedicated regulatory and legal teams are ahead of the curve, as they can help ensure compliance even in the most nuanced circumstances.

Direct experience with regulatory pathways such as FDA clearance and/or the CE marking process for European conformity. Many AI partners have a keen understanding of the intricacies involved in navigating regulatory clearance. Those with this experience make better partners for health systems that have reservations about AI adoption, having already addressed issues surrounding risk mitigation and interoperability against the highest standards.

Real-life experience measuring the performance and value of AI in clinical environments. AI regulations, guidelines, and publications focus on demonstrating real-world performance and value, and there are multiple ways to measure these impacts. In radiology, measurement might start with interpretation and turnaround times, but value is often demonstrated through additional service-line impact, for example, when AI is used to integrate and streamline pulmonary embolism response teams. Downstream impacts could include reduced length of stay for specific pathologies or enhanced disease awareness leading to earlier diagnosis and treatment.

Plans to adapt to the changing environment. A strong AI partner should have a comprehensive plan to adapt to the evolving regulations and ensure the long-term success of your AI investment. This includes strategies for drift mitigation, model retraining, ongoing maintenance, and proxies for performance. Additionally, they should demonstrate the ability to scale AI solutions beyond radiology and across the entire health care enterprise, showing a commitment to innovation and growth in AI implementation.

As AI becomes increasingly integrated into clinical workflows, scrutiny of AI partners will become even more important to sustaining AI success. The ECLAIR guidelines,4 for example, are already setting a framework for those seeking commercial AI solutions in radiology.

With varying regulations and guidelines shaping the decisions of health care leaders, it is essential to remain prudent in assessing partners beyond their promises to improve workflow or care delivery; this includes evaluating their regulatory infrastructure and experience in navigating regulatory environments.

Overall, a resilient AI ecosystem that lives up to the hype must focus on enhancing patient care, fostering health system innovation, and upholding the highest technological integrity and accountability standards as AI becomes indispensable.

References

1. The White House. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence; 2023. Accessed 16 July 2024. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
2. Office of the National Coordinator for Health Information Technology. Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule. HealthIT.gov. Accessed 16 July 2024. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program
3. American Medical Association. AMA issues new principles for AI development, deployment & use; 2023. Accessed 16 July 2024. https://www.ama-assn.org/press-center/press-releases/ama-issues-new-principles-ai-development-deployment-use
4. Omoumi P, Ducarouge A, Tournier A, et al. To buy or not to buy: evaluating commercial AI solutions in radiology (the ECLAIR guidelines). Eur Radiol. 2021;31(6):3786-3796. doi:10.1007/s00330-020-07684-x
5. Brady AP, Allen B, Chong J, et al. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Radiol Artif Intell. 2024;6(1):230513. doi:10.1148/ryai.230513
6. Joshi G, Jain A, Araveeti SR, et al. FDA-approved artificial intelligence and machine learning (AI/ML)-enabled medical devices: an updated landscape. Electronics. 2024;13(3):498. doi:10.3390/electronics13030498
7. Greenberg Traurig. Utah Enacts First AI-Focused Consumer Protection Legislation in US. Accessed 16 July 2024. https://www.gtlaw.com/en/insights/2024/4/utah-enacts-first-ai-focused-consumer-protection-legislation-in-us
Giannikopoulos D. (Aug 01, 2024). In AI, Experience Matters: A Guide to Navigating the Regulatory and Governance Environment. Appl Radiol. 2024;53(4):34-35.
© Anderson Publishing, Ltd. 2024. All rights reserved. Reproduction in whole or part without express written permission is strictly prohibited.