Abstract: Unsupervised out-of-distribution (OOD) detection is critical for the safe deployment of machine learning systems, yet standard likelihood-based methods using deep generative models (DGMs) often fail, assigning deceptively high likelihoods to anomalous data. We attribute this failure, particularly within Variational Autoencoders (VAEs), to a phenomenon we term likelihood cancellation: informative signals from the model’s encoder and decoder can neutralize each other within the final scalar likelihood. To overcome this, we introduce the Likelihood Path (LPath) Principle, a new framework that extracts a robust OOD signal from the entire computational path of a VAE. We operationalize this principle by reinterpreting VAEs through the lens of fast and slow weights, enabling online, instance-wise inference without costly retraining. Our method extracts minimal sufficient statistics from the VAE’s inference path and feeds them into a classical density estimator. On standard benchmarks (CIFAR-10, SVHN, CIFAR-100), our LPath method achieves state-of-the-art OOD detection, outperforming models with over 10x the parameters. Our lightweight 3M-parameter VAE provides a highly efficient and principled solution for real-world, streaming OOD detection.
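The pipeline described above (extract statistics along the VAE's inference path, then score them with a classical density estimator) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the `encode`/`decode` stand-ins, the specific statistics (encoder mean norm, mean log-variance, reconstruction error), and the diagonal-Gaussian scorer are all assumptions made for the sake of a runnable example; the real LPath method uses a trained VAE and its own choice of minimal sufficient statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_statistics(x, encode, decode):
    """Hypothetical per-sample statistics from a VAE's inference path:
    encoder mean norm, mean encoder log-variance, reconstruction error."""
    mu, logvar = encode(x)
    recon = decode(mu)
    return np.array([np.linalg.norm(mu),
                     logvar.mean(),
                     np.mean((x - recon) ** 2)])

# Toy stand-ins for a trained VAE's encoder and decoder (assumption:
# a real pipeline would use the learned networks instead).
encode = lambda x: (x[:4], np.log(np.abs(x[:4]) + 1e-3))
decode = lambda mu: np.tile(mu, 4)[:16]

# Fit a simple diagonal-Gaussian density estimator on the statistics
# of in-distribution data (any classical estimator could be swapped in).
train = np.stack([path_statistics(rng.normal(size=16), encode, decode)
                  for _ in range(500)])
mean, std = train.mean(0), train.std(0) + 1e-8

def ood_score(x):
    """Negative log-density (up to a constant) of x's path statistics
    under the fitted estimator; higher means more out-of-distribution."""
    z = (path_statistics(x, encode, decode) - mean) / std
    return 0.5 * np.sum(z ** 2)

in_dist_score = ood_score(rng.normal(size=16))
shifted_score = ood_score(rng.normal(loc=5.0, size=16))
```

Here a mean-shifted input yields a much larger score than an in-distribution one, because its path statistics fall far from the fitted density; this is the sense in which the full inference path, rather than the scalar likelihood alone, carries the OOD signal.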
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Wesley_Maddox1
Submission Number: 6499