The OOD Blind Spot of Unsupervised Anomaly Detection

Published: 31 Mar 2021, Last Modified: 16 May 2023, MIDL 2021
Keywords: Unsupervised Lesion Detection, Out-of-Distribution Detection
TL;DR: We investigate the vulnerability of unsupervised lesion detection frameworks to domain-shifted (OOD) data and the disentanglement of OOD detection from anomaly detection.
Abstract: Deep unsupervised generative models are regarded as a promising alternative to their supervised counterparts in the field of MRI-based lesion detection. They offer a principled approach for detecting unseen types of anomalies without relying on large amounts of expensive ground truth annotations. To this end, deep generative models are trained exclusively on data from healthy patients and detect lesions as Out-of-Distribution (OOD) data at test time (i.e. low likelihood). While this is a promising way of bypassing the need for costly annotations, this work demonstrates that it also renders this widely used unsupervised anomaly detection approach particularly vulnerable to non-lesion-based OOD data (e.g. data from different sensors). Since models are likely to be exposed to such OOD data in production, it is crucial to employ safety mechanisms that filter such samples and run inference only on input for which the model can provide reliable results. First, we show extensively that conventional, unsupervised anomaly detection mechanisms fail when presented with true OOD data. Second, we apply prior knowledge to disentangle lesion-based OOD samples from their non-lesion-based counterparts.
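The following is a minimal, self-contained sketch of the likelihood-thresholding idea the abstract describes, using a toy diagonal-Gaussian density model instead of the paper's deep generative models on MRI; all names, dimensions, and distributions are hypothetical placeholders, not the authors' pipeline (see the source code URL below for that). It illustrates the blind spot: both lesion-like anomalies and domain-shifted (OOD) inputs fall below the likelihood threshold, so the detector alone cannot tell them apart.

```python
# Toy illustration (NOT the paper's method): likelihood-based anomaly detection
# with a density model fitted only on "healthy" feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "healthy" training features: in-distribution samples.
healthy_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))

# Fit a diagonal Gaussian density model on healthy data only.
mu = healthy_train.mean(axis=0)
var = healthy_train.var(axis=0) + 1e-6

def log_likelihood(x):
    """Per-sample log-likelihood under the fitted diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

# Threshold chosen from healthy data (e.g. its 5th likelihood percentile).
threshold = np.percentile(log_likelihood(healthy_train), 5)

# Test-time inputs: healthy, lesion-like (in-domain anomaly), and domain-shifted (OOD).
healthy_test = rng.normal(0.0, 1.0, size=(100, 8))
lesion_test = rng.normal(3.0, 1.0, size=(100, 8))         # anomalous content
domain_shift_test = rng.normal(0.0, 4.0, size=(100, 8))    # e.g. a different scanner

for name, batch in [("healthy", healthy_test),
                    ("lesion", lesion_test),
                    ("domain-shifted", domain_shift_test)]:
    flagged = np.mean(log_likelihood(batch) < threshold)
    print(f"{name:15s} flagged as anomalous: {flagged:.0%}")
```

Running this, the lesion-like and domain-shifted batches are both flagged at a high rate, which is exactly why an upstream OOD filter (or prior knowledge, as the abstract proposes) is needed to separate the two cases.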
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: validation/application paper
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Interpretability and Explainable AI
Source Code Url: https://github.com/matthaeusheer/uncertify