Keywords: invertible neural networks, one class classification, anomaly detection
TL;DR: Using invertible neural networks to map inlier data to a compact latent distribution, enabling effective anomaly detection.
Abstract: This work presents a novel approach to the one-class classification problem by leveraging invertible neural networks (INNs). Our method, "Invertible One-Class Classification" (IOCN), maps the data distribution to a compact latent distribution, specifically a uniform distribution on a hypercube. In contrast to the usual latent Gaussian, the uniform distribution defines a clear boundary between inliers and outliers and thus facilitates outlier detection by simply measuring the signed distance to that boundary. To train our mapping, we propose a novel objective function and prove that its optimum is the transport from the data distribution to the uniform distribution on the latent hypercube. Interestingly, this objective is simpler than traditional maximum likelihood training because it does not require the flow's Jacobian determinant. Experiments demonstrate that our method outperforms standard normalizing flows in outlier detection and matches the state of the art.
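The abstract's scoring rule, the signed distance from a latent point to the hypercube boundary, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the latent hypercube is [-1, 1]^d and uses the L-infinity norm, so the sign of the score directly indicates inside (inlier) versus outside (outlier); the `half_width` parameter and the example latent vectors are hypothetical.

```python
import numpy as np

def outlier_score(z, half_width=1.0):
    """Signed distance (under the L-infinity norm) from latent point(s) z
    to the boundary of the hypercube [-half_width, half_width]^d.

    Negative score: z lies inside the cube (predicted inlier).
    Positive score: z lies outside the cube (predicted outlier).
    """
    z = np.atleast_2d(z)  # accept a single vector or a batch
    return np.max(np.abs(z), axis=1) - half_width

# Hypothetical usage: in IOCN, z would come from the trained INN,
# e.g. z = inn.forward(x); here we use fixed latent vectors.
z_in = np.array([0.2, -0.5, 0.1])   # all coordinates within [-1, 1]
z_out = np.array([0.3, 1.7, -0.4])  # one coordinate outside [-1, 1]
print(outlier_score(z_in))   # negative -> inlier
print(outlier_score(z_out))  # positive -> outlier
```

The thresholding at zero is what the abstract means by a "clear boundary": unlike a Gaussian latent, where any cutoff on the density is arbitrary, the hypercube's support itself separates inliers from outliers.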
Submission Number: 4