A Multi-Modal Feature Fusion-Based Approach for Chest X-Ray Report Generation
Abstract
Given the growing dependence on medical imaging, there is a significant need for automated report generation, which can save radiologists' time and reduce the likelihood of diagnostic errors. Existing approaches face several difficulties, including insufficient clinical professionalism, limited coverage of diverse diseases, and a lack of fluency in the generated reports. These problems largely stem from encoder-decoder deep learning architectures that model only a unidirectional image-to-report relationship and neglect the bidirectional connections between images and reports, making it difficult to capture the intrinsic medical correlations between them. To this end, we propose a novel approach for chest radiology report generation based on multi-modal feature fusion. Our method leverages visual and textual features extracted from chest X-ray images and their corresponding ground-truth reports. First, we use a Vision Transformer to extract visual features from the medical images, and a Word2Vec model to extract semantic features from the textual medical reports. Additionally, we employ channel attention networks and cross-modal information fusion modules to enhance the quality and coherence of the generated reports. We evaluated our proposed approach on two publicly available chest X-ray datasets, IU X-ray and NIH. The results show that our approach outperforms state-of-the-art methods, particularly on the ROUGE and BLEU metrics.
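To make the fusion step concrete, the following is a minimal PyTorch sketch of the two components named in the abstract: a channel attention module that re-weights feature channels, and a cross-modal fusion module in which visual tokens attend to report word embeddings. The feature dimensions, module names, and placeholder tensors (standing in for Vision Transformer patch embeddings and projected Word2Vec embeddings) are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch only; dimensions, names, and fusion design are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over feature channels."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(),
            nn.Linear(dim // reduction, dim), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, tokens, dim)
        weights = self.fc(x.mean(dim=1))  # global average over tokens -> (batch, dim)
        return x * weights.unsqueeze(1)   # re-weight each channel

class CrossModalFusion(nn.Module):
    """Cross-attention: visual tokens query the report word embeddings."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual, textual):
        fused, _ = self.attn(query=visual, key=textual, value=textual)
        return self.norm(visual + fused)  # residual connection + normalization

# Placeholder features: 196 ViT patch tokens and 60 Word2Vec word embeddings,
# both projected to a shared dimension before fusion (hypothetical sizes).
batch, dim = 2, 256
visual_feats = torch.randn(batch, 196, dim)   # from a Vision Transformer
text_feats = torch.randn(batch, 60, dim)      # from Word2Vec + linear projection

channel_attn = ChannelAttention(dim)
fusion = CrossModalFusion(dim)

visual_feats = channel_attn(visual_feats)
fused_feats = fusion(visual_feats, text_feats)
print(fused_feats.shape)  # torch.Size([2, 196, 256]) -> passed to the report decoder
```

In this sketch the fused representation would be handed to a text decoder that generates the report; the exact decoder and training objective are not specified in the abstract.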