A promising future for AI in breast cancer screening
By Massat MB
A recently released report projects the world market for artificial intelligence (AI) and machine learning in medical imaging, including software for automated detection, quantification, decision support and diagnosis, will reach $2 billion by 2023. According to the report’s author, analyst Simon Harris, “The interest and enthusiasm for AI in the radiologist community has notably increased over the last 12 to 18 months and the discussion has moved on from AI as a threat to how AI will augment radiologists.”
Yet, Harris notes in his report that several barriers remain. The regulatory process remains challenging, and more large-scale validation studies are needed to demonstrate the performance of deep learning algorithms in clinical settings. Additionally, with many start-ups and specialist companies driving the application of machine learning to medical imaging, integrating their tools with established medical imaging software and systems remains a challenge.
Regardless of these barriers, Harris is optimistic in his view of the future. “Over the coming years, the combined R&D firepower of the expanding ecosystem will knock down the remaining barriers and radiologists will have a rapidly expanding array of AI-powered workflow and diagnostic tools at their disposal,” he says.
Breast cancer screening can lead to the detection of benign (noncancerous) findings, and reducing biopsies and surgeries for these benign findings is one clinical area where industry and academia have been especially active. Researchers at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital and Harvard Medical School collaborated to develop a machine learning model that predicts whether a high-risk breast lesion identified on biopsy will be upgraded to cancer at surgery or can be safely surveilled. Tested on 335 high-risk lesions, the model correctly diagnosed 97% of the breast cancers as malignant and reduced benign lesion surgeries by more than 30% compared to current methods.1 Currently, only 30% of breast biopsies are positive for cancer.
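Conceptually, such a model scores each lesion for malignancy, and an operating point is then chosen so that nearly all true cancers still go to surgery while as many benign lesions as possible are spared. As a minimal sketch in Python, using hypothetical scores rather than the study's actual model, the threshold-selection step might look like:

```python
import math

def pick_threshold(probs, labels, min_sensitivity=0.97):
    """Highest score threshold that still sends at least `min_sensitivity`
    of the truly malignant lesions (label 1) to surgery."""
    mal_scores = sorted((p for p, y in zip(probs, labels) if y == 1),
                        reverse=True)
    # number of malignant cases that must score at or above the threshold
    k = math.ceil(min_sensitivity * len(mal_scores))
    return mal_scores[k - 1]

def benign_surgeries_avoided(probs, labels, threshold):
    """Fraction of benign lesions (label 0) scoring below the threshold,
    i.e. lesions that could be surveilled instead of excised."""
    benign = [p for p, y in zip(probs, labels) if y == 0]
    return sum(1 for p in benign if p < threshold) / len(benign)

# Toy data: 3 malignant and 4 benign lesions with made-up scores
probs  = [0.9, 0.8, 0.7, 0.75, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,    0,   0,   0]
t = pick_threshold(probs, labels)
spared = benign_surgeries_avoided(probs, labels, t)
```

The trade-off in the study maps onto exactly these two numbers: sensitivity for malignancy is held near 97% while the fraction of avoidable benign surgeries is maximized.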
“No existing imaging method can determine which high-risk lesions will be upgraded to cancer at surgery and which won’t be,” says Manisha Bahl, MD, MPH, Director, Breast Imaging Fellowship Program at Massachusetts General Hospital (MGH) and lead author of the study. “Management of high-risk lesions is controversial; most are non-cancerous but surgery is recommended because high-risk lesions have the potential to be upgraded to cancer at the time of surgery.”
Dr. Bahl says that Regina Barzilay, MIT's Delta Electronics Professor of Electrical Engineering and Computer Science, was inspired to apply her AI expertise to breast cancer after her own experience with breast cancer detection and treatment. She partnered with Constance Lehman, MD, professor of radiology and chief of breast imaging at MGH, and Dr. Bahl to apply AI across various projects, from high-risk lesions to ductal carcinoma in situ (DCIS) and mammogram interpretation. While the high-risk lesion algorithm has yet to be implemented into clinical practice at MGH, Dr. Bahl says they are using an algorithm developed through the MIT-MGH collaboration that assesses breast density. The algorithm was trained on breast density assessments made by MGH radiologists, not on a quantitative methodology, Dr. Bahl explains.
At RSNA 2017, researchers from Radboud University Medical Centre in Nijmegen, the Netherlands, presented findings comparing the performance of experienced radiologists with that of Transpara (ScreenPoint Medical BV, Nijmegen, the Netherlands), a deep learning computer detection system, in detecting breast cancer on mammograms.
The study included 24 radiologists who retrospectively reviewed more than 1,400 2D digital mammography exams: 336 were positive for cancer, 430 contained benign findings and the remaining 669 were normal. The results showed no significant difference between automated reading with the Transpara software and reading by the radiologists.2
“There is tremendous potential to improve the process of reading screening mammograms and digital breast tomosynthesis, and machine learning will further expand that,” says Professor Nico Karssemeijer, PhD, CEO of ScreenPoint, in a company press release. “What we are developing now with the new machine learning and deep learning techniques is evolving, however, we know that the scope will be much wider than that of existing CAD systems for mammography.”
For example, radiologists use conventional CAD after they have reviewed the images to ensure nothing was missed. With Transpara, support is available concurrently while radiologists read the study. According to Professor Karssemeijer, one abstract accepted at RSNA will demonstrate that when radiologists use the solution, their performance improves without taking more time.
A second abstract accepted at RSNA is a follow-up to last year’s study. It will report the performance of over 100 radiologists who used Transpara as a stand-alone reading aid.
AI can also assist in distinguishing types of cancer cells with nearly 100% accuracy, according to a recent study by researchers at Weill Cornell Medicine and New York-Presbyterian. The researchers developed a convolutional neural network (CNN), a type of deep learning program loosely modeled on the human brain, to analyze pathology images and determine whether they are malignant; if so, the program can also indicate what type of cancer is present. The program was built on two state-of-the-art CNN architectures, Google's Inception and ResNet, evaluated under three training strategies.
To train the CNN, the researchers exposed the program to thousands of pathology images of known breast, lung and bladder cancers. Then, the researchers obtained more than 13,000 new pathology images of breast, lung and bladder cancer to test the algorithms. The network distinguished the type of cancer in the samples with 100% accuracy and could also determine lung cancer subtypes with 92% accuracy. Additionally, the program identified biomarkers for breast and bladder cancer with 91% and 99% accuracy, respectively.3
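Accuracy figures like these come from comparing predicted class labels against known labels on the held-out test images. A minimal, hypothetical sketch of that bookkeeping (not the study's code):

```python
from collections import defaultdict

def accuracy(y_true, y_pred):
    """Overall fraction of images classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_accuracy(y_true, y_pred):
    """Fraction of images classified correctly within each true class,
    e.g. one figure for lung subtypes, another for bladder."""
    totals, hits = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += int(t == p)
    return {c: hits[c] / totals[c] for c in totals}

# Made-up labels for four test images
y_true = ["breast", "lung", "bladder", "lung"]
y_pred = ["breast", "lung", "bladder", "breast"]
overall = accuracy(y_true, y_pred)
by_class = per_class_accuracy(y_true, y_pred)
```

Reporting per-class accuracy alongside overall accuracy matters here because the test set mixes three cancer types; a classifier can score well overall while doing poorly on the rarest class.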
In the UK, a consortium of leading breast cancer experts, clinicians, academic institutions and AI companies is partnering to explore whether AI can help detect and diagnose breast cancer more efficiently. Led by Imperial College London, the consortium is based at the Cancer Research UK Imperial Centre (a partnership between the college, Imperial College Healthcare NHS Trust and Cancer Research UK) and will work with DeepMind Health and the AI health research team at Google.
Machine learning technology from DeepMind Health and the AI research team at Google will be applied to approximately 7,500 mammograms provided by the Cancer Research UK-funded OPTIMAM database at the Royal Surrey County Hospital NHS Foundation Trust. The team plans to evaluate the possibility of training the computer algorithm to analyze the images for signs of cancer and alert radiologists more accurately than is possible with current technology.
“Radiology has already benefitted so much from advances in technology and the implementation of AI will be another giant leap for our specialty,” says Alyssa Watanabe, MD, Clinical Associate Professor at the University of Southern California Keck School of Medicine and Chief Medical Officer at CureMetrix, Inc. “AI will be a tremendous boost for breast imaging. Mammography is truly the most difficult to read of all medical imaging studies.”
She adds, “Half of breast cancers can be seen retrospectively and tremendous resources are spent on false positive workups and biopsies. Improving accuracy and reducing costs will give more strength to the benefits of mass breast cancer screening.”
Dr. Watanabe has been involved in several clinical studies evaluating CureMetrix's technology as part of a team of researchers at USC. At RSNA 2017, she presented results showing that approximately 50% of benign biopsies could be eliminated with the use of AI-based biopsy classifier software for mammography. At this year's meeting, she will present results from a reader study showing a statistically significant benefit from the use of AI in medical imaging.
According to Dr. Watanabe, the most desired improvement for image analysis software in mammography is a reduction in false positives per image (FPPI). In one study conducted by CureMetrix, 28% of false positive recalls could have been avoided. Another study reported a 69% reduction in FPPI compared to traditional CAD. In a third, retrospective study of mammograms in which breast cancers were initially missed with conventional CAD, cancer detection rates increased 27% on average while false positive markings increased by less than 1%.
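FPPI itself is straightforward to compute: count the CAD marks on each image that match no true lesion, then average over all images. A minimal sketch with hypothetical region identifiers (not CureMetrix's implementation):

```python
def false_positives_per_image(marks_per_image, truths_per_image):
    """Average number of flagged regions per image that do not match
    any true lesion. Each element is a set of region identifiers."""
    fp = sum(len(marks - truths)
             for marks, truths in zip(marks_per_image, truths_per_image))
    return fp / len(marks_per_image)

def fppi_reduction(baseline_fppi, new_fppi):
    """Percent reduction in FPPI relative to a baseline CAD system."""
    return 100.0 * (baseline_fppi - new_fppi) / baseline_fppi

# Two toy images: marks "a" and "c" hit true lesions, "b" does not
marks  = [{"a", "b"}, {"c"}]
truths = [{"a"},      set()]
fppi = false_positives_per_image(marks, truths)
```

A reduction figure such as the 69% quoted above is then just `fppi_reduction` applied to the baseline CAD's FPPI and the new system's FPPI on the same image set.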
“Any method to improve the performance of mammography interpretation could tremendously affect patient care, radiologist workflow and system costs,” Dr. Watanabe says.
“Clever algorithms based on AI/machine learning make CAD more intelligent and effective, improving lesion detection and classification,” says Lawrence Tanenbaum, MD, FACR, Director of MRI, CT and Advanced Imaging, and Vice President and Medical Director, Eastern Operations, RadNet, Inc. “There is promise that the enhanced pattern recognition in machine learning based tools will assist in the identification of cancers before they are evident to a radiologist reader.”
Dr. Tanenbaum cautions, however, that accomplishing this requires a lot of heavy lifting. AI companies require high-quality data that is characterized, annotated and backed by pathological proof in order to train models appropriately and generate high-quality output. He adds that, ideally, a single learning architecture could be applied to multiple varied applications across medical imaging, such as mammography, lung and colon screening.
Dr. Bahl agrees that development of AI in breast imaging, as well as other imaging specialties, will need a large number of cases for training the algorithms and that proprietary data may restrict access. However, she points out that through agreements such as the one between MIT and MGH, these barriers can be overcome. And the data that centers such as MGH generate—with upwards of 150 screening mammograms performed each day by Dr. Bahl’s estimate—is invaluable to the development and validation of AI algorithms in breast imaging.
As director of the breast imaging fellowship program at MGH, Dr. Bahl believes it is important for current residents and fellows to understand AI terminology and the use of this technology.
“Machine learning in breast imaging is in its infancy but I’m excited about its potential to improve clinical decision-making and decrease the morbidity and costs of overtreatment,” says Dr. Bahl.
“This is an exciting time for radiology,” Dr. Tanenbaum says. “AI will undoubtedly enhance our capabilities and importance in the imaging enterprise.”
References
1. Bahl M, Barzilay R, Yedidia AB, et al. High-risk breast lesions: a machine learning model to predict pathologic upgrade and reduce unnecessary surgical excision. Radiology. 2018;286(3):810-818.
2. ScreenPoint Medical press release. Available at: https://www.prweb.com/releases/2017/11/prweb14932866.htm.
3. Khosravi P, Kazemi E, Imielinski M, Elemento O, Hajirasouliha I. Deep convolutional neural networks enable discrimination of heterogeneous digital pathology images. EBioMedicine. 2018;27:317-328.
4. Holland K, Sechopoulos I, Mann RM, et al. Influence of breast compression pressure on the performance of population-based mammography screening. Breast Cancer Res. 2017;19:126.