Is Scale All You Need For Anomaly Detection?

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: anomaly detection
TL;DR: A theoretical model and empirical evidence highlighting the trade-off between representation sufficiency and over-expressivity in anomaly detection.
Abstract: Scaling up neural representations has led to an unprecedented boost in the performance of anomaly detection methods. This paper tackles the question: can we solve anomaly detection with arbitrary accuracy by continuing to scale up neural representations? We begin by highlighting that overly expressive representations often fail to detect even simple anomalies when evaluated beyond well-studied object-centric datasets. We explain this phenomenon by introducing a theoretical toy model of anomaly detection performance. The model provides evidence for a no-free-lunch theorem in anomaly detection, stating that increasing representation expressivity will eventually degrade performance. To break this deadlock, it is necessary to provide guidance that focuses the representation on the attributes relevant to the anomalies of interest. We conducted an extensive empirical investigation demonstrating that state-of-the-art representations often suffer from over-expressivity, failing to detect many types of anomalies in practical settings. Our paper underscores that achieving breakthroughs in anomaly detection requires more than just scale; it requires making informed assumptions about the nature of the anomalies.
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3775