Heterogeneous Loss Function with Aggressive Rejection for Contaminated Data in Anomaly Detection

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: anomaly detection, contaminated data, unsupervised learning
TL;DR: This paper proposes a heterogeneous loss function with aggressive rejection for anomaly detection on contaminated data.
Abstract: A clean training dataset, consisting of only normal data, is crucial for detecting anomalous data. However, a clean dataset is challenging to produce in practice. Here, a heterogeneous loss function with aggressive rejection is proposed, which strengthens robustness against contamination. Aggressive rejection constrains training on the intersection of the normal and abnormal distributions to handle potential anomalies. The heterogeneous loss function uses an adaptive, mini-batch-level stochastic choice of the order of the asymptotic polynomial of the GA loss, which further optimizes the gradient over that intersection dynamically. With the proposed method, mean-squared-error-based models can outperform various robust loss functions and achieve performance comparable to robust models on contaminated versions of three image datasets.
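The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration of the aggressive-rejection idea: per mini-batch, the largest per-sample reconstruction errors are discarded as potential anomalies, and the remaining errors are aggregated under a stochastically chosen polynomial order (standing in for the adaptive order choice applied to the GA loss). The function name, the trimming rule, and the candidate orders are assumptions, not the authors' exact method.

```python
import numpy as np

def aggressive_rejection_loss(errors, reject_frac=0.2, orders=(1.0, 2.0), rng=None):
    """Hypothetical sketch of aggressive rejection with a stochastic
    polynomial order, NOT the paper's exact formulation.

    errors      : per-sample squared reconstruction errors for one mini-batch
    reject_frac : fraction of the highest-error samples dropped as likely anomalies
    orders      : candidate polynomial orders; one is drawn per mini-batch
    """
    rng = np.random.default_rng() if rng is None else rng
    errors = np.asarray(errors, dtype=float)
    keep = int(np.ceil(len(errors) * (1.0 - reject_frac)))
    # Aggressive rejection: keep only the lowest errors, dropping suspected anomalies.
    kept = np.sort(errors)[:keep]
    # Stochastic choice of the polynomial order applied to the kept errors.
    p = rng.choice(orders)
    return float(np.mean(kept ** p))
```

With a fixed single order the behavior is deterministic: for errors `[1, 2, 3, 4, 100]` and `reject_frac=0.2`, the outlier `100` is dropped before aggregation, so a contaminating sample no longer dominates the mean loss.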
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning
Supplementary Material: zip