Medical imaging occupies an important place in modern healthcare, improving diagnostic accuracy and reliability and supporting the development of treatments for a wide range of diseases. Artificial intelligence (AI) is now widely used to improve the process further.
However, conventional AI-based medical image diagnosis requires large amounts of annotations as supervisory signals for model training. To acquire accurate labels, radiologists prepare radiology reports for each of their patients as part of the clinical routine; annotation staff then extract and confirm structured labels from these reports using human-defined rules and existing natural language processing (NLP) tools. The accuracy of the extracted labels therefore depends on the quality of both the human work and the NLP tools, and the process is laborious, time-consuming and expensive.
A team of engineers from the University of Hong Kong (HKU) has developed a new approach, REFERS (Reviewing Free-text Reports for Supervision), which cuts this human cost by 90% by automatically acquiring supervision signals from hundreds of thousands of radiology reports at once. It achieves high prediction accuracy, surpassing conventional AI-based medical image diagnosis.
The innovative approach marks an important step towards realizing widespread medical artificial intelligence. The work was published in Nature Machine Intelligence in an article entitled “Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports”.
Professor YU Yizhou, team leader from the Department of Computer Science under HKU’s Faculty of Engineering, said that AI-enabled medical imaging diagnosis can help medical specialists reduce their workload and improve diagnostic efficiency and accuracy, for example by shortening diagnostic time and detecting subtle disease patterns.
The team believes that the abstract, logically complex sentences in radiology reports carry enough information to learn easily transferable visual features. With appropriate training, REFERS learns radiographic representations directly from free-text reports without requiring human labeling effort, Professor Yu noted.
For REFERS training, the research team used a public database of around 370,000 X-ray images and their associated reports, covering 14 common chest diseases including atelectasis, cardiomegaly, pleural effusion, pneumonia and pneumothorax.
The researchers built an X-ray recognition model using just 100 radiologist-annotated X-rays and achieved 83% prediction accuracy. When that number was increased to 1,000, the model reached 88.2% accuracy, outperforming a counterpart trained with 10,000 radiologist annotations (87.6% accuracy). With 10,000 annotated X-rays, accuracy rose to 90.1%. In general, prediction accuracy above 85% is useful in real-world clinical applications.
REFERS achieves this through two report-related tasks: report generation and X-ray–report matching. In the first task, REFERS translates X-rays into textual reports by first encoding them into an intermediate representation, which a decoder network then uses to predict the report text.
A cost function measures the similarity between the predicted and actual report texts; gradient-based optimization of this cost trains the neural network and updates its weights.
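The report-generation objective can be sketched in miniature. The snippet below is an illustrative toy, not the authors’ code: it stands in for the decoder with a table of per-position logits, scores the predicted token distribution against the actual report with cross-entropy, and takes one gradient step to show the cost decreasing. Vocabulary size, report tokens, and the learning rate are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 6              # tiny illustrative vocabulary (assumption)
report = [2, 4, 1]          # the "actual" report as token ids (assumption)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for the decoder network's output: one row of logits per
# report position, initialized randomly.
logits = rng.normal(size=(len(report), vocab_size))

def cross_entropy(logits, targets):
    """Cost: negative log-probability of the actual report tokens."""
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(targets)), targets]))

loss_before = cross_entropy(logits, report)

# One gradient-based update. For softmax cross-entropy, the gradient
# w.r.t. the logits is softmax(logits) - one_hot(targets), averaged
# over report positions.
probs = softmax(logits)
grad = probs.copy()
grad[np.arange(len(report)), report] -= 1.0
logits -= 0.5 * grad / len(report)   # learning rate 0.5 (assumption)

loss_after = cross_entropy(logits, report)
print(loss_after < loss_before)
```

In the full system the logits would come from the decoder conditioned on the image encoding, and the gradient would flow back through both networks; the loss and update rule are the same shape.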
In the second task, REFERS encodes the X-rays and the free-text reports into the same semantic space, where the representation of each report is aligned with those of its associated X-rays via contrastive learning.
The first author of the paper, Dr. ZHOU Hong-Yu, said that compared with conventional methods that rely heavily on human annotations, REFERS can acquire supervision from every word in a radiology report. The amount of data annotation can be cut by about 90%, greatly reducing the cost of building medical artificial intelligence. This marks an important step towards achieving widespread medical artificial intelligence.