Bidirectional Learning for the Visual Representation in Radiology Report Generation with Frozen LLMs

22 Sept 2024 (modified: 03 Dec 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Bidirectional Learning, Radiology Report Generation, Representation Learning, Large Language Models.
Abstract: Radiology report generation (R2Gen) has recently leveraged large language models (LLMs), achieving improved results. However, the generated reports still fall short in both language accuracy and clinical relevance. A key challenge is learning a visual representation of radiology images that an LLM can effectively interpret. To address this, we propose that for a visual representation to be interpretable by an LLM, it should also be generatable by the LLM. Building on this idea, we introduce a novel bidirectional learning framework for R2Gen that integrates both vision-to-text and text-to-vision information to enhance visual representation learning. First, we require that the visual representation help the LLM generate reports that closely match the ground truth. Second, we require that the visual representation be maximally generatable by the LLM when it is provided with the ground-truth report. To enable the frozen LLM to perform text-to-vision generation, we jointly train a new text encoder for reports. Additionally, through an image reconstruction task, we encourage the visual representation to capture the core features of the input radiology images. This bidirectional learning framework is realized with a frozen LLM and incurs no extra computational cost at inference. Experimental results demonstrate better alignment between the learned visual representation and the LLM's word embedding space, along with state-of-the-art performance in both language accuracy and clinical efficacy. Our code will be publicly released.
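The abstract's three training signals (vision-to-text report generation, text-to-vision regeneration of the visual representation, and image reconstruction) can be read as a weighted multi-task objective. The sketch below is a toy illustration of that combination only, assuming a simple sum of loss terms; the tensor shapes, the MSE stand-ins, the placeholder vision-to-text value, and the weights `lam_t2v`/`lam_rec` are all hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumed shapes, not from the paper):
# v      - visual representation from the image encoder
# v_hat  - visual representation re-generated by the frozen LLM from the report
# x, x_hat - input radiology image and its reconstruction
v, v_hat = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
x, x_hat = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))

def mse(a, b):
    """Mean squared error, standing in for the real loss terms."""
    return float(np.mean((a - b) ** 2))

# Vision-to-text would be the LLM's report cross-entropy; a constant stands in here.
loss_v2t = 1.25            # placeholder for -log p(report | v)
loss_t2v = mse(v, v_hat)   # text-to-vision: the LLM should regenerate v from the report
loss_rec = mse(x, x_hat)   # reconstruction: v should retain core image content

lam_t2v, lam_rec = 0.5, 0.1   # hypothetical loss weights
total = loss_v2t + lam_t2v * loss_t2v + lam_rec * loss_rec
print(round(total, 4))
```

Because the text-to-vision and reconstruction terms act only during training, dropping them at inference leaves the report-generation path unchanged, consistent with the claim of no extra inference cost.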
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2568