Vision-Language Model for Multitask Medical Text Generation

23 Aug 2025 (modified: 01 Sept 2025) · MICCAI 2025 Challenge FLARE Submission · CC BY 4.0
Keywords: Multimodal, Vision-language, Medical imaging applications
TL;DR: Vision-Language Model for Multitask Medical Text Generation
Abstract: Artificial intelligence (AI) has made significant progress in healthcare, where multimodal large models that integrate medical imaging and text have attracted considerable attention; building such models, however, remains challenging, particularly for generative tasks. This study develops a vision-language model architecture tailored to medical scenarios, built on multimodal medical images (e.g., X-ray, ultrasound, and ophthalmic images) and their corresponding textual descriptions. The model adapts well across diverse imaging modalities and integrates several key functionalities, including medical report generation, visual question answering (VQA), and lesion detection in medical images. On the MICCAI FLARE 2025 Task 5 challenge, our model achieves state-of-the-art performance, with a regression error of only 13.63, a detection score of 0.80, and a classification score of 0.70. It shows potential as a unified interface for radiological diagnosis, promising to significantly improve diagnostic efficiency across a range of medical imaging applications.
Submission Number: 1