AI Model Accurately Identifies Tumors and Diseases in Medical Images
An artificial intelligence model developed by researchers at the Beckman Institute for Advanced Science and Technology accurately identifies tumors and diseases in medical images and is programmed to explain each diagnosis with a visual map. The tool’s unique transparency allows doctors to easily follow its line of reasoning, double-check for accuracy, and explain the results to patients.
"The idea is to help catch cancer and disease in its earliest stages — like an X on a map — and understand how the decision was made. Our model will help streamline that process and make it easier on doctors and patients alike,” said Sourya Sengupta, the study’s lead author and a graduate research assistant at the Beckman Institute.
This research appeared in IEEE Transactions on Medical Imaging.
An AI model functioning as a doctor’s assistant is raised on a diet of thousands of medical images, some with abnormalities and some without. When faced with something never-before-seen, it runs a quick analysis and spits out a number between 0 and 1. If the number is less than 0.5, the image is not assumed to contain a tumor; a value greater than 0.5 warrants a closer look.
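In code, that thresholding step might look like the following minimal sketch. The function name, the example scores, and the exact cutoff handling are illustrative assumptions; the article describes only the 0-to-1 output and the 0.5 decision point, not the underlying network.

```python
def flag_for_review(score: float, threshold: float = 0.5) -> bool:
    """Return True when a model's score warrants a closer look.

    The model emits a value between 0 and 1; a score above the
    threshold is treated as a possible abnormality.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    return score > threshold

# Illustrative scores, not real model output:
print(flag_for_review(0.3))  # False: no tumor assumed
print(flag_for_review(0.7))  # True: warrants a closer look
```

The threshold is kept as a parameter because, in practice, screening systems often tune it to trade false negatives against false positives.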
Sengupta’s new AI model mimics this setup with a twist: the model produces a value plus a visual map explaining its decision.
The map — referred to by the researchers as an equivalency map, or E-map for short — is essentially a transformed version of the original X-ray, mammogram, or other medical image medium. Like a paint-by-numbers canvas, each region of the E-map is assigned a number. The greater the value, the more medically interesting the region is for predicting the presence of an anomaly. The model sums up the values to arrive at its final figure, which then informs the diagnosis.
“For example, if the total sum is 1, and you have three values represented on the map — 0.5, 0.3, and 0.2 — a doctor can see exactly which areas on the map contributed more to that conclusion and investigate those more fully,” Sengupta said.
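That accounting can be sketched in a few lines. The region names and values below are hypothetical, taken from Sengupta’s example; a real E-map assigns a value to every region of the image.

```python
# Hypothetical E-map region contributions for one image.
e_map = {"region_a": 0.5, "region_b": 0.3, "region_c": 0.2}

# The model's final figure is the sum of all region values;
# here 0.5 + 0.3 + 0.2 = 1.0, which exceeds the 0.5 decision point.
total = sum(e_map.values())
print(f"total score: {total:.1f}")

# A doctor can rank regions by contribution to see which areas
# drove the decision and investigate those more fully.
ranked = sorted(e_map.items(), key=lambda kv: kv[1], reverse=True)
for region, value in ranked:
    print(region, value)
```

Because the final figure is a plain sum, every region’s share of the decision is directly readable off the map, which is what lets a doctor audit the model’s reasoning.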
This way, doctors can double-check how well the deep neural network is working — like a teacher checking the work on a student’s math problem — and respond to patients’ questions about the process.
“The result is a more transparent, trustable system between doctor and patient,” Sengupta said.
The researchers trained their model on three different disease diagnosis tasks including more than 20,000 total images.
First, the model reviewed simulated mammograms and learned to flag early signs of tumors. Second, it analyzed optical coherence tomography images of the retina, where it practiced identifying a buildup called drusen that may be an early sign of macular degeneration. Third, the model studied chest X-rays and learned to detect cardiomegaly, a heart enlargement condition that can lead to disease.
Once the mapmaking model had been trained, the researchers compared its performance to existing black-box AI systems — the ones without a self-interpretation setting. The new model performed comparably to its counterparts in all three categories, with accuracy rates of 77.8% for mammograms, 99.1% for retinal OCT images, and 83% for chest X-rays, compared to the existing models’ 77.8%, 99.1%, and 83.33%.
These high accuracy rates are a product of the deep neural network, the non-linear layers of which mimic the nuance of human neurons.
To create such a complicated system, the researchers peeled the proverbial onion and drew inspiration from linear neural networks, which are simpler and easier to interpret.
“The question was: How can we leverage the concepts behind linear models to make non-linear deep neural networks also interpretable like this?” said principal investigator Mark Anastasio, a Beckman Institute researcher and the Donald Biggar Willet Professor and Head of the Illinois Department of Bioengineering. “This work is a classic example of how fundamental ideas can lead to some novel solutions for state-of-the-art AI models.”
The researchers hope that future models will be able to detect and diagnose anomalies all over the body and even differentiate between them.
“I am excited about our tool’s direct benefit to society, not only in terms of improving disease diagnoses, but also improving trust and transparency between doctors and patients,” Anastasio said.