An Unsupervised Approach for Artifact Severity Scoring in Multi-Contrast MR Images

Published: 27 Mar 2025, Last Modified: 01 May 2025 · MIDL 2025 Poster · CC BY 4.0
Keywords: MRI, Quality Assurance, Artifact Detection
TL;DR: We introduce an unsupervised QA framework that quantifies MR image artifact severity for automated quality assessment.
Abstract: Quality assurance (QA) in magnetic resonance (MR) imaging is critical but remains a challenging and time-intensive process, particularly when working with large-scale, multi-site imaging datasets. Manual QA methods are subjective, prone to inter-rater variability, and impractical for high-throughput workflows. Existing automated QA methods often lack generalizability to diverse datasets or fail to provide interpretable insights into the causes of poor image quality. To address these limitations, we introduce an unsupervised and interpretable QA framework for multi-contrast MR images that quantifies artifact severity. By assigning a numerical score to each image, our method enables objective, consistent evaluation of image quality and highlights specific levels of artifact presence that can impair downstream analysis. Our framework employs an unsupervised contrastive learning approach, leveraging simulated artifact transformations, including random bias, noise, anisotropy, and ghosting, to train the model without requiring manual labels or preprocessing. A margin-based contrastive loss further enables differentiation between varying levels of artifact severity. We validate our framework using simulated artifacts on a public dataset and real artifacts on a private clinical dataset, demonstrating its robustness and generalizability for automatic MR image QA. By efficiently evaluating image quality and identifying artifacts prior to data processing, our approach streamlines QA workflows and enhances the reliability of subsequent analyses in both research and clinical settings.
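The abstract describes a margin-based contrastive loss that separates embeddings of images by their simulated artifact severity, but does not state the exact formulation. Below is a minimal, hypothetical sketch of one plausible hinge-style variant, in which the required embedding distance grows with the severity gap between two images; the function name `severity_margin_loss`, the scalar `severity_gap`, and the toy 2-D embeddings are all illustrative assumptions, not the authors' implementation.

```python
import math

def l2_distance(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def severity_margin_loss(emb_a, emb_b, severity_gap, margin=1.0):
    # Hypothetical hinge-style contrastive term: embeddings of two images
    # whose artifact-severity levels differ by `severity_gap` are pushed to
    # be at least `severity_gap * margin` apart in feature space.
    d = l2_distance(emb_a, emb_b)
    return max(0.0, severity_gap * margin - d)

# Toy embeddings for a clean image and two corrupted versions of it.
clean = [0.0, 0.0]
mild = [0.4, 0.3]    # severity gap of 1 vs. clean
heavy = [2.0, 1.5]   # severity gap of 3 vs. clean

loss_mild = severity_margin_loss(clean, mild, severity_gap=1)
loss_heavy = severity_margin_loss(clean, heavy, severity_gap=3)
```

In this sketch, a pair already separated by more than its severity-scaled margin contributes zero loss, so training effort concentrates on pairs whose embedding distance under-represents their severity difference. In practice the severity levels would come from the simulated transformations the abstract lists (random bias, noise, anisotropy, ghosting) applied at known strengths.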
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Interpretability and Explainable AI
Paper Type: Methodological Development
Registration Requirement: Yes
Submission Number: 187
