Improving Out-of-Distribution Anomaly Detection with Domain-Invariant Latent Representations

TMLR Paper 2880 Authors

17 Jun 2024 (modified: 27 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Domain generalization focuses on leveraging knowledge from the training data of multiple related domains to enhance inference on unseen in-distribution (IN) and out-of-distribution (OOD) domains. In this study, we introduce a multi-task representation learning technique that leverages the knowledge of multiple related domains to improve the detection of classes from unseen domains. Our method cultivates a latent space from data spanning multiple domains, encompassing both source and cross-domains, to strengthen generalization to OOD domains. Additionally, we disentangle the latent space by minimizing the mutual information between the input and the latent representation, de-correlating spurious correlations among the samples of a specific domain. Collectively, this joint optimization facilitates domain-invariant feature learning. Applying these principles of domain generalization, we develop a robust anomaly detection model that can accurately identify anomalies even when those anomalies come from a distribution different from the training data. We assess the model's efficacy across multiple cybersecurity datasets, using standard classification metrics on both unseen IN and OOD sets, and compare the results against contemporary domain generalization methods.
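The abstract does not specify how the mutual information between input and latent space is minimized; one common realization is a variational-information-bottleneck-style KL penalty, which upper-bounds I(X; Z) for a Gaussian encoder. The sketch below is purely illustrative, assuming a diagonal-Gaussian encoder; the function name, toy data, and β weight are hypothetical, not the authors' implementation:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Per-sample KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder.

    Averaged over a batch, this KL is a standard variational upper bound on
    the mutual information I(X; Z); adding it to the task losses pushes the
    latent representation toward domain invariance.
    """
    # Closed form for diagonal Gaussians: 0.5 * sum(mu^2 + var - logvar - 1)
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1)

# Toy batch: encoder outputs for 4 samples with 8 latent dimensions.
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))
logvar = rng.normal(scale=0.1, size=(4, 8))

# Hypothetical joint objective: task loss (omitted) + beta * MI upper bound.
beta = 0.1
mi_penalty = kl_to_standard_normal(mu, logvar).mean()
regularizer = beta * mi_penalty
```

In practice this term would be weighted against the classification and cross-domain losses described in the paper; the KL form shown here is just one standard choice of MI bound.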
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Weijian_Deng1
Submission Number: 2880