Keywords: Anomaly Detection; Fairness
Abstract: Anomaly detection (AD) has been widely studied for decades in many real-world applications, including fraud detection in finance and intrusion detection in cybersecurity. Existing anomaly detection methods struggle in imbalanced group scenarios, where the unprotected group is significantly larger than the protected group. Specifically, fairness-unaware methods achieve high overall performance by misclassifying more protected group examples as anomalies, while fairness-aware methods overcompensate for fairness by labeling excessive unprotected group examples as anomalies, sacrificing overall performance. To address these issues, we propose FADIG, a fairness-aware contrastive learning-based anomaly detection method designed for imbalanced groups. FADIG consists of two key modules: (1) an adaptively re-balanced autoencoder module that dynamically adjusts group contributions to balance fairness with performance and (2) a fairness-aware contrastive learning module that maximizes similarity between protected and unprotected groups to ensure fairness. Moreover, we provide a theoretical analysis showing that the proposed contrastive learning regularization guarantees group fairness. Extensive experiments across multiple real-world datasets demonstrate the effectiveness and efficiency of FADIG in achieving both accurate and fair anomaly detection.
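The fairness-aware contrastive idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the centroid-based formulation, and the cosine-similarity choice are all assumptions made purely for illustration of a regularizer that encourages embeddings of the two groups to align:

```python
import numpy as np

def group_alignment_reg(z, groups, eps=1e-8):
    """Illustrative fairness regularizer (hypothetical, not FADIG's actual loss).

    z      : (n, d) array of embeddings, e.g. from an autoencoder's bottleneck.
    groups : (n,) array of 0/1 group labels (0 = unprotected, 1 = protected).

    Returns 1 - cos(c0, c1), where c0 and c1 are the two group centroids.
    The value is 0 when the centroids point in the same direction, so
    minimizing it pushes the groups' representations to be similar and
    makes it harder for anomaly scores to separate examples by group.
    """
    c0 = z[groups == 0].mean(axis=0)
    c1 = z[groups == 1].mean(axis=0)
    cos = np.dot(c0, c1) / (np.linalg.norm(c0) * np.linalg.norm(c1) + eps)
    return 1.0 - cos
```

In practice such a term would be added, with a weight, to the reconstruction objective; the sketch only shows the alignment component.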
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 12584