Can Frozen Transformers in Large Language Models Help with Medical Image Segmentation?

Published: 27 Apr 2024, Last Modified: 09 Jun 2024. MIDL 2024 Short Papers. License: CC BY 4.0
Keywords: Medical image segmentation, Large Language Models, Frozen transformers, TransUNet
Abstract: Transformer models excel at medical image segmentation by using self-attention to capture global context, which boosts segmentation accuracy. Recent research has shown that large language models (LLMs), trained solely on text, can surprisingly benefit visual tasks involving no language at all, via a simple strategy: inserting a frozen transformer block from a pre-trained LLM directly as a visual token processor. This paper applies that approach to medical image segmentation by combining frozen LLM transformer blocks with TransUNet. Experiments on the BTCV, ACDC, ISIC 2017, CVC-ClinicDB, CVC-ColonDB, and BUSI datasets demonstrate some improvements over the TransUNet baseline.
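To make the insertion strategy concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a frozen LLM transformer block is placed between the encoder's visual tokens and the decoder, with trainable linear projections bridging the dimension gap. The class name `FrozenLLMBlockAdapter`, all dimensions, and the `nn.TransformerEncoderLayer` stand-in for a real pre-trained LLM block are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class FrozenLLMBlockAdapter(nn.Module):
    """Wraps a frozen transformer block from a pre-trained LLM so it can
    process visual tokens between a ViT-style encoder (e.g. TransUNet's)
    and a segmentation decoder. Only the two linear projections train;
    the LLM block's weights stay frozen."""

    def __init__(self, llm_block: nn.Module, vis_dim: int, llm_dim: int):
        super().__init__()
        self.proj_in = nn.Linear(vis_dim, llm_dim)   # trainable: visual -> LLM width
        self.llm_block = llm_block
        self.proj_out = nn.Linear(llm_dim, vis_dim)  # trainable: LLM -> visual width
        for p in self.llm_block.parameters():
            p.requires_grad = False                  # freeze the LLM block

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, vis_dim) visual tokens from the encoder.
        # Gradients still flow *through* the frozen block to proj_in.
        x = self.proj_in(tokens)
        x = self.llm_block(x)
        return self.proj_out(x)


if __name__ == "__main__":
    # Stand-in for a real pre-trained LLM layer; a faithful reproduction
    # would load an actual LLM block's weights instead of this randomly
    # initialised layer.
    llm_dim = 512
    stand_in_block = nn.TransformerEncoderLayer(
        d_model=llm_dim, nhead=8, batch_first=True)
    adapter = FrozenLLMBlockAdapter(stand_in_block, vis_dim=768, llm_dim=llm_dim)

    vis_tokens = torch.randn(2, 196, 768)  # e.g. 14x14 patch tokens
    out = adapter(vis_tokens)
    print(out.shape)  # torch.Size([2, 196, 768]), ready for the decoder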
Submission Number: 180