Adapting Pretrained Vision-Language Foundational Models to Medical Imaging Domains

05 Oct 2022 (modified: 29 Sept 2024) · FMDM@NeurIPS2022 · Readers: Everyone
Keywords: Foundational model, multi-modal, stable diffusion, domain adaptation, fine-tuning, medical imaging, radiology
TL;DR: Text conditioned generative models can be fine-tuned to generate clinically accurate images, capable of inserting abnormalities and creating high-quality synthetic data.
Abstract: Multi-modal foundational models are trained on millions of pairs of natural images and texts, frequently obtained through web-crawling approaches. Although their performance is excellent, these models do not generalize well to other domains, such as medical imaging, especially when the images in those domains do not resemble the object-centric images typically found on the web. In this study, we assess the ability of the stable diffusion model to generate domain-specific images in the particular case of medical imaging. Based on quantitative and qualitative evaluations of the main components of the stable diffusion pipeline (the variational autoencoder, the U-Net, and the text encoder), we explore several approaches to fine-tuning stable diffusion to generate radiological images that accurately represent the clinical content of conditional text prompts. Our best-performing model improves upon the stable diffusion baseline and can be correctly conditioned to insert an abnormality on a synthetic radiology image.
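The fine-tuning described in the abstract ultimately optimizes the standard denoising (noise-prediction) objective of latent diffusion models. As a minimal, self-contained sketch (plain Python with a scalar "latent" and a hand-written oracle denoiser; not the paper's code, which operates on VAE latents with a text-conditioned U-Net), the training step looks like:

```python
# Toy illustration of the denoising objective used when fine-tuning the
# U-Net of a latent diffusion model such as Stable Diffusion. A single
# scalar "latent" and a closed-form "denoiser" stand in for the real
# VAE latents and text-conditioned U-Net (both hypothetical stand-ins).
import math
import random

def ddpm_training_step(x0, alpha_bar, predict_eps):
    """One simplified DDPM training step: noise x0 at level alpha_bar,
    then score the model's noise prediction with squared error."""
    eps = random.gauss(0.0, 1.0)                       # sampled Gaussian noise
    x_t = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
    eps_hat = predict_eps(x_t, alpha_bar)              # model's noise estimate
    return (eps_hat - eps) ** 2                        # MSE loss on the noise

def oracle(x_t, alpha_bar, x0=2.0):
    """An oracle that recovers the true noise exactly (it cheats by
    knowing x0); its near-zero loss sanity-checks the forward process."""
    return (x_t - math.sqrt(alpha_bar) * x0) / math.sqrt(1.0 - alpha_bar)

random.seed(0)
loss = ddpm_training_step(2.0, 0.5, oracle)
print(f"{loss:.6f}")  # → 0.000000 (up to floating-point rounding)
```

In the actual pipeline, `predict_eps` is the U-Net conditioned on the text-encoder embedding of the prompt, and the paper's fine-tuning strategies differ in which of the three components (VAE, U-Net, text encoder) are updated against this loss.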
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/adapting-pretrained-vision-language/code)