LEVERAGING IMAGE EDITING FOUNDATION MODELS FOR DATA-EFFICIENT CT METAL ARTIFACT REDUCTION
Keywords: CT restoration, few-shot learning, image editing models, parameter-efficient fine-tuning, medical reasoning
TL;DR: We adapt a general-purpose image editing diffusion model for CT metal artifact reduction using LoRA, achieving state-of-the-art perceptual quality with only 16–128 training pairs, two orders of magnitude less data than conventional methods.
Abstract: Metal artifacts from high-attenuation implants severely degrade CT image quality, obscuring critical anatomical structures and posing a challenge for standard deep learning methods that require extensive paired training data. We propose a paradigm shift: reframing artifact reduction as an in-context reasoning task by adapting a general-purpose vision-language diffusion foundation model via parameter-efficient Low-Rank Adaptation (LoRA). By leveraging rich visual priors, our approach achieves effective artifact suppression with only 16 to 128 paired training examples, reducing data requirements by two orders of magnitude. Crucially, we demonstrate that domain adaptation is essential for hallucination mitigation; without it, foundation models misinterpret streak artifacts as natural objects (e.g., waffles or petri dishes). To ground the restoration, we propose a multi-reference conditioning strategy in which clean anatomical exemplars from unrelated subjects are provided alongside the corrupted input, enabling the model to exploit category-specific context to infer uncorrupted anatomy. Extensive evaluation on the AAPM CT-MAR benchmark demonstrates that our method achieves state-of-the-art performance on perceptual and radiological-feature metrics. This work establishes that foundation models, when appropriately adapted, offer a scalable alternative for interpretable, data-efficient medical image reconstruction. Code is available at https://github.com/ahmetemirdagi/CT-EditMAR.
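The core mechanism the abstract relies on, Low-Rank Adaptation, replaces full fine-tuning of a frozen pretrained weight W with a trainable low-rank residual (alpha / r) * B @ A. The sketch below is our own minimal NumPy illustration of that update rule, not code from the paper; all names, shapes, and values are hypothetical.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass through a frozen weight W plus a LoRA update.

    W: (d_out, d_in) frozen pretrained weight (never updated).
    A: (r, d_in) and B: (d_out, r) are the trainable low-rank factors;
    the effective weight is W + (alpha / r) * B @ A.
    """
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))  # A initialised randomly (as in standard LoRA)
B = np.zeros((d_out, r))            # B initialised to zero, so the update starts at 0
x = rng.standard_normal((1, d_in))

# With B = 0 the LoRA branch contributes nothing: output equals the frozen forward.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only A and B are trained, the number of tunable parameters scales with r * (d_in + d_out) rather than d_in * d_out, which is what makes adaptation feasible from only 16–128 paired examples.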
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 21