Crowdsourcing Effort Seeks Machine-Learning Tools to Automate Rheumatoid Arthritis Assessment
At the American College of Rheumatology (ACR) annual meeting, a multicenter team led by an investigator from Hospital for Special Surgery (HSS) presented the results from the RA2-DREAM Challenge, a crowdsourced effort focused on developing better methods to quantify joint damage in people with rheumatoid arthritis (RA).
Damage in the joints of people with RA is currently measured by visual inspection and detailed scoring of radiographic images of small joints in the hands, wrists and feet. The scoring covers both joint space narrowing (which indicates cartilage loss) and bone erosions (which indicate damage from invasion of the inflamed joint lining). The scoring system requires specially trained experts and is time-consuming and expensive. Finding an automated way to measure joint damage is important both for clinical research and for patient care, according to the study’s senior author, S. Louis Bridges, Jr., MD, PhD, physician-in-chief and chair of the Department of Medicine at HSS.
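As a rough illustration of what such a quantitative score involves, the sketch below aggregates per-joint narrowing and erosion subscores into a single total. The joint names, score values and helper function are hypothetical illustrations, not the exact rubric used by trained readers.

```python
# Hypothetical illustration of a composite damage score built from
# per-joint subscores; field names and values are assumptions, not the
# actual scoring rubric described in the article.
from dataclasses import dataclass

@dataclass
class JointReading:
    joint: str          # e.g., "right wrist" (illustrative label)
    narrowing: int      # joint space narrowing subscore (cartilage loss)
    erosion: int        # bone erosion subscore (damage from inflamed joint lining)

def total_damage_score(readings: list[JointReading]) -> int:
    """Sum narrowing and erosion subscores across all scored joints."""
    return sum(r.narrowing + r.erosion for r in readings)

readings = [
    JointReading("right wrist", narrowing=2, erosion=3),
    JointReading("left MTP1", narrowing=1, erosion=0),
]
print(total_damage_score(readings))  # -> 6
```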
“If a machine-learning approach could provide a quick, accurate quantitative score estimating the degree of joint damage in hands and feet, it would greatly help clinical research,” he said. “For example, researchers could analyze data from electronic health records and from genetic and other research assays to find biomarkers associated with progressive damage. Having to score all the images by visual inspection ourselves would be tedious, and outsourcing it is cost prohibitive.”
“This approach could also aid rheumatologists by quickly assessing whether there is progression of damage over time, which would prompt a change in treatment to prevent further damage,” he added. “This is really important in geographic areas where expert musculoskeletal radiologists are not available.”
For the challenge, Dr. Bridges and his collaborators partnered with Sage Bionetworks, a nonprofit organization that helps investigators create DREAM (Dialogue on Reverse Engineering Assessment and Methods) Challenges. These competitions focus on the development of innovative artificial intelligence-based tools in the life sciences. The investigators sent out a call for submissions, with grant money providing prizes for the winning teams. Competitors came from a variety of fields and included computer scientists, computational biologists and physician-scientists; none were radiologists with expertise or training in reading radiographic images.
For the first part of the challenge, the teams were given one set of images along with known, visually generated scores; these were used to train the algorithms. Additional sets of images were then provided so the competitors could test and refine the tools they had developed. In the final round, a third set of images was given without scores, and competitors estimated the amount of joint space narrowing and erosion. Submissions were judged according to how closely they replicated the gold-standard visually generated scores. Twenty-six teams submitted algorithms, and there were 16 final submissions. In total, competitors were given 674 sets of images from 562 RA patients, all of whom had participated in prior National Institutes of Health-funded research studies led by Dr. Bridges. In the end, four teams were named top performers.
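In outline, the workflow resembles a standard supervised-learning pipeline: fit a model on the scored training images, predict scores for a held-out set, and measure agreement with the gold standard. The minimal sketch below uses random stand-in features and RMSE as the closeness measure; both are illustrative assumptions, not the challenge’s actual data or official judging metric.

```python
# A minimal sketch of the challenge workflow, not any competitor's actual
# method: train on expert-scored images, predict scores for a held-out set,
# and judge predictions by closeness to gold-standard visual scores.
# The features, scores, and RMSE metric below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Stand-ins for image-derived features and expert scores (synthetic data).
X_train = rng.random((500, 64))   # features extracted from training radiographs
y_train = rng.random(500) * 100   # expert-assigned damage scores
X_test = rng.random((100, 64))    # final-round images, provided without scores
y_gold = rng.random(100) * 100    # gold-standard scores, withheld from competitors

# Train on the scored set, then estimate scores for the unscored set.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Submissions were judged by how closely they replicated the visual scores;
# RMSE is one plausible closeness measure (an assumption, not the official one).
rmse = np.sqrt(mean_squared_error(y_gold, y_pred))
print(f"RMSE vs. gold-standard scores: {rmse:.2f}")
```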
For the DREAM Challenge organizers, it was important that any scoring system developed through the project be freely available rather than proprietary, so that it could be used by investigators and clinicians at no cost. “Part of the appeal of this collaboration was that everything is in the public domain,” Dr. Bridges said.
Dr. Bridges explained that additional research and development of computational methods are needed before the tools can be broadly used, but the current research demonstrates that this type of approach is feasible. “We still need to refine the algorithms, but we’re much closer to our goal than we were before the Challenge,” he concluded.