Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports

Feb 25, 2022

An interdisciplinary research team, led by Professor Yu Yizhou at the Department of Computer Science, has been working on the research topic “Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports”. The research paper was published in Nature Machine Intelligence on January 20, 2022.

Details of the paper:

Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports

Hong-Yu Zhou, Xiaoyu Chen, Yinghao Zhang, Ruibang Luo, Liansheng Wang & Yizhou Yu

Article in Nature Machine Intelligence 4, pages 32–40 (2022)


Abstract:

Pre-training lays the foundation for recent successes in radiograph analysis supported by deep learning. It learns transferable image representations by conducting large-scale fully- or self-supervised learning on a source domain; however, supervised pre-training requires a complex and labour-intensive two-stage human-assisted annotation process, whereas self-supervised learning cannot compete with the supervised paradigm. To tackle these issues, we propose a cross-supervised methodology called reviewing free-text reports for supervision (REFERS), which acquires free supervision signals from the original radiology reports accompanying the radiographs. The proposed approach employs a vision transformer and is designed to learn joint representations from multiple views within every patient study. REFERS outperforms its transfer learning and self-supervised learning counterparts on four well-known X-ray datasets under extremely limited supervision. Moreover, REFERS even surpasses methods based on a source domain of radiographs with human-assisted structured labels; it therefore has the potential to replace canonical pre-training methodologies.
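To make the cross-supervision idea concrete, below is a minimal, illustrative PyTorch sketch; it is not the authors' implementation. All class names (ViewEncoder, CrossSupervisedModel), hyperparameters, and the specific contrastive loss are hypothetical assumptions chosen for illustration. Following the abstract, the sketch encodes each radiograph view with a small vision transformer, fuses the views of a patient study into a single study-level embedding with attention, and aligns that embedding with an embedding of the corresponding free-text report, so that the report, rather than a human-assisted label, supplies the supervision signal.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """Toy stand-in for a radiograph vision transformer:
    patch embedding followed by a transformer encoder."""
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.cls = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.pos = nn.Parameter(torch.randn(1, num_patches + 1, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):  # x: (B, 1, H, W) single-channel radiographs
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, tokens], dim=1) + self.pos)
        return h[:, 0]  # class-token embedding of one view

class CrossSupervisedModel(nn.Module):
    """Fuses per-view embeddings of a patient study with attention, then
    aligns the study embedding with a report embedding via a symmetric
    InfoNCE-style contrastive loss (one plausible form of cross-supervision)."""
    def __init__(self, dim=256, text_dim=256, proj_dim=128, temperature=0.07):
        super().__init__()
        self.view_encoder = ViewEncoder(dim=dim)
        self.fusion = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.img_proj = nn.Linear(dim, proj_dim)
        self.txt_proj = nn.Linear(text_dim, proj_dim)
        self.temperature = temperature

    def forward(self, views, report_emb):
        # views: (B, V, 1, H, W); report_emb: (B, text_dim) from any text encoder
        B, V = views.shape[:2]
        v = self.view_encoder(views.flatten(0, 1)).view(B, V, -1)  # per-view features
        q = self.query.expand(B, -1, -1)
        study, _ = self.fusion(q, v, v)  # attention fusion across views of a study
        z_img = F.normalize(self.img_proj(study.squeeze(1)), dim=-1)
        z_txt = F.normalize(self.txt_proj(report_emb), dim=-1)
        logits = z_img @ z_txt.t() / self.temperature  # study-report similarities
        targets = torch.arange(B, device=logits.device)
        # each study should match its own report, and vice versa
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Usage with random data (2 views per study, as in multi-view patient studies):
model = CrossSupervisedModel()
views = torch.randn(8, 2, 1, 224, 224)  # 8 studies, 2 radiograph views each
report_emb = torch.randn(8, 256)        # hypothetical report embeddings
loss = model(views, report_emb)

In a real pipeline the report embedding would come from a language model run over the free-text radiology report, which is what makes the supervision “free”: no two-stage human-assisted annotation of structured labels is needed.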


Link: https://www.nature.com/articles/s42256-021-00425-9