Screener: Learning the Conditional Distribution of Dense Self-supervised Representations for Unsupervised Pathology Segmentation in 3D Medical Images
Keywords: Unsupervised visual anomaly detection, self-supervised learning, density estimation, medical image analysis
TL;DR: We propose a novel framework for fully self-supervised visual anomaly segmentation that outperforms existing methods for unsupervised pathology segmentation in 3D medical images.
Abstract: Accurate and automated anomaly segmentation is critical for assisting clinicians in detecting and diagnosing pathological conditions, particularly in large-scale medical imaging datasets where manual annotation is not only time- and resource-intensive but also prone to inconsistency. To address these challenges, we propose Screener, a fully self-supervised framework for visual anomaly segmentation that leverages self-supervised representation learning to eliminate the need for manual labels. Screener models the conditional distribution of local image patterns given their global context, so that patterns with low conditional probability are identified as anomalies and assigned high anomaly scores.
Screener comprises three components: a descriptor model that encodes local image patterns into self-supervised representations invariant to local-content-preserving augmentations; a condition model that captures global contextual information through invariance to image masking; and a density model that estimates the conditional density of descriptors given their global contexts to compute anomaly scores.
We validate Screener by training a fully self-supervised model on over 30,000 3D CT images and evaluating its performance on four large-scale test datasets comprising 1,820 3D CT scans across four chest and abdominal pathologies. Our framework consistently outperforms existing unsupervised anomaly segmentation methods. Code and pre-trained models will be made publicly available.
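For concreteness, the following is a minimal inference-time sketch of how the three components described in the abstract could fit together; the module names and the `log_prob` interface are illustrative assumptions rather than the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ScreenerSketch(nn.Module):
    """Hypothetical sketch: voxel-wise anomaly scoring as the negative log
    conditional density of local descriptors d(x) given global context c(x),
    i.e. score(x) = -log p(d(x) | c(x))."""

    def __init__(self, descriptor: nn.Module, condition: nn.Module, density: nn.Module):
        super().__init__()
        self.descriptor = descriptor  # local patterns -> dense self-supervised descriptors
        self.condition = condition    # global context, trained to be invariant to image masking
        self.density = density        # conditional density estimator of p(descriptor | context)

    @torch.no_grad()
    def anomaly_map(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (B, 1, D, H, W) 3D CT scan
        d = self.descriptor(volume)           # (B, C_d, D, H, W) dense descriptors
        c = self.condition(volume)            # (B, C_c, D, H, W) contextual conditions
        log_p = self.density.log_prob(d, c)   # (B, D, H, W); assumed voxel-wise interface
        return -log_p                         # low conditional probability -> high anomaly score
```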
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13755