Keywords: MRI, Quality Assurance, Artifact Detection
TL;DR: We introduce an unsupervised QA framework that quantifies MR image artifact severity for automated quality assessment.
Abstract: Quality assurance (QA) in magnetic resonance (MR) imaging is critical but remains a challenging and time-intensive process, particularly when working with large-scale, multi-site imaging datasets. Manual QA methods are subjective, prone to inter-rater variability, and impractical for high-throughput workflows. Existing automated QA methods often lack generalizability to diverse datasets or fail to provide interpretable insights into the causes of poor image quality. To address these limitations, we introduce an unsupervised and interpretable QA framework for multi-contrast MR images that quantifies artifact severity. By assigning a numerical score to each image, our method enables objective, consistent evaluation of image quality and flags levels of artifact severity that can impair downstream analysis. Our framework employs an unsupervised contrastive learning approach, leveraging simulated artifact transformations, including random bias, noise, anisotropy, and ghosting, to train the model without requiring manual labels or preprocessing. A margin-based contrastive loss further enables differentiation between varying levels of artifact severity. We validate our framework using simulated artifacts on a public dataset and real artifacts on a private clinical dataset, demonstrating its robustness and generalizability for automatic MR image QA. By efficiently evaluating image quality and identifying artifacts prior to data processing, our approach streamlines QA workflows and enhances the reliability of subsequent analyses in both research and clinical settings.
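The margin-based contrastive objective described in the abstract can be illustrated with a minimal sketch. This is a hypothetical, NumPy-only illustration, not the authors' implementation: it assumes embeddings of images with heavier simulated artifacts should lie farther from a clean anchor than embeddings of lightly corrupted images, by at least a fixed margin (a hinge-style ranking loss). The embedding vectors here are random stand-ins for a trained encoder's outputs.

```python
import numpy as np

def margin_ranking_loss(d_light, d_heavy, margin=1.0):
    """Hinge-style margin loss: the heavily corrupted view should be
    at least `margin` farther from the clean anchor than the lightly
    corrupted view; otherwise a penalty is incurred."""
    return np.maximum(0.0, margin - (d_heavy - d_light))

rng = np.random.default_rng(0)

# Hypothetical embeddings: a clean anchor plus two corrupted views,
# where stronger simulated artifact severity perturbs the embedding more.
anchor = rng.normal(size=8)
light = anchor + 0.1 * rng.normal(size=8)   # mild simulated artifact
heavy = anchor + 1.5 * rng.normal(size=8)   # severe simulated artifact

d_light = np.linalg.norm(anchor - light)
d_heavy = np.linalg.norm(anchor - heavy)

loss = margin_ranking_loss(d_light, d_heavy, margin=1.0)
```

In a full training loop, the distances would come from encoder embeddings of the same image under artifact transforms of different severities (e.g., varying bias-field or ghosting strength), so the loss teaches the model to order images by artifact severity rather than merely separate clean from corrupted.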
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Interpretability and Explainable AI
Paper Type: Methodological Development
Registration Requirement: Yes
Reproducibility: https://github.com/shays15/artifact_scoring
Midl Latex Submission Checklist:
- Ensure no LaTeX errors during compilation.
- Created a single midl25_NNN.zip file with midl25_NNN.tex, midl25_NNN.bib, all necessary figures and files.
- Includes \documentclass{midl}, \jmlryear{2025}, \jmlrworkshop, \jmlrvolume, \editors, and correct \bibliography command.
- Did not override options of the hyperref package.
- Did not use the times package.
- All authors and co-authors are correctly listed with proper spelling and avoid Unicode characters.
- Author and institution details are de-anonymized where needed. All author names, affiliations, and paper title are correctly spelled and capitalized in the biography section.
- References must use the .bib file. Did not override the bibliographystyle defined in midl.cls. Did not use \begin{thebibliography} directly to insert references.
- Tables and figures do not overflow margins; avoid using \scalebox; used \resizebox when needed.
- Included all necessary figures and removed *unused* files in the zip archive.
- Removed special formatting, visual annotations, and highlights used during rebuttal.
- All special characters in the paper and .bib file use LaTeX commands (e.g., \'e for é).
- Appendices and supplementary material are included in the same PDF after references.
- Main paper does not exceed 9 pages; acknowledgements, references, and appendix start on page 10 or later.
Latex Code: zip
Copyright Form: pdf
Submission Number: 187