Positive Difference Distribution for Image Outlier Detection using Normalizing Flows and Contrastive Data

Published: 26 Apr 2023, Last Modified: 26 Apr 2023
Accepted by TMLR
Abstract: Detecting test data that deviates from the training data is a central problem for safe and robust machine learning. Likelihoods learned by a generative model, e.g., a normalizing flow trained via standard log-likelihood maximization, perform poorly as outlier scores. We propose to use an unlabelled auxiliary dataset and a probabilistic outlier score for outlier detection. We use a self-supervised feature extractor trained on the auxiliary dataset and train a normalizing flow on the extracted features by maximizing the likelihood on in-distribution data and minimizing the likelihood on the contrastive dataset. We show that this is equivalent to learning the normalized positive difference between the in-distribution and the contrastive feature density. We conduct experiments on benchmark datasets and compare to the likelihood, the likelihood ratio, and state-of-the-art anomaly detection methods.
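For concreteness, the training objective described in the abstract can be sketched as below. This is a minimal illustration only, not the authors' implementation: it assumes pre-extracted feature tensors and a density object exposing a `log_prob` method (as in common normalizing-flow libraries); the function and variable names are hypothetical, a Gaussian is used as a stand-in for a trained flow, and details such as the clamping of the contrastive term (discussed in the paper's appendix) are omitted.

```python
import torch
from torch.distributions import MultivariateNormal

def contrastive_flow_loss(flow, feats_in, feats_contrastive):
    # Maximize log-likelihood on in-distribution features ...
    nll_in = -flow.log_prob(feats_in).mean()
    # ... while minimizing it on features from the contrastive (auxiliary) dataset.
    ll_contrastive = flow.log_prob(feats_contrastive).mean()
    return nll_in + ll_contrastive

# Stand-in density with a log_prob method; a real normalizing flow trained on
# self-supervised features would take its place.
flow = MultivariateNormal(torch.zeros(8), torch.eye(8))
loss = contrastive_flow_loss(flow, torch.randn(32, 8), torch.randn(32, 8) + 2.0)
```

Read this way, the abstract's equivalence claim says that the density learned under such an objective is proportional to the positive part of the difference between the in-distribution and contrastive feature densities, i.e., roughly max(p_in(x) − p_c(x), 0) after normalization.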
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=IHjNxOm6zK
Changes Since Last Submission: In response to the rebuttal discussion, we have made several revisions to improve the paper's logical flow and to highlight the task and approach. We also introduce a new version of our method (CF-FT) that shows a significant performance improvement. The changes are as follows.

Revised Title and Introduction:
- We changed the title to "Positive Difference Distribution based Image Outlier Detection using Normalizing Flows and Contrastive Data" to emphasize the image-specific setting and to align the paper with the literature on (out-of-distribution) outlier detection, in contrast to defect detection.
- We modified the introduction and contributions to emphasize the task definition and the approach in a separate paragraph.

Reorganized Method and Experimental Sections to improve the logical flow:
- We reorganized the method section and moved the low-dimensional toy experiments from the experimental section to the method section to help the reader understand the method.
- We reorganized the experimental section by first showing the ablation studies for different mixtures as the contrastive distribution and the comparison to the ratio method and the standard normalizing flow.
- We added a new findings section after the ablation studies to summarize the results and motivate the choice of IMAGENET as the contrastive distribution for the benchmark experiments.

New Method CF-FT:
- We added a new method, CF-FT, to the experiments. The method fine-tunes a standard normalizing flow for a few epochs using our objective and shows statistically significant state-of-the-art performance on the benchmark experiments.
- We included significance tests to conduct statistical comparisons between our methods and the benchmark methods.

Appendix and Other Changes:
- We moved sections to the appendix to reduce the paper's length: training pseudocode, contrastive loss clamping, MoCo fine-tuning, and Beyond the Image Domain.
- We added motivation and explanation for the MoCo feature extractor.
- We redesigned figures for better readability and understanding and used vector graphics.
- We made several editorial changes to improve the formulations and readability.

We marked all changed or new sections in blue and all moved sections in grey to help readers identify the revisions.
Assigned Action Editor: ~George_Papamakarios1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 887