Keywords: OOD Detection, Normalizing Flow, Likelihood Paradox of Deep Generative Models
Abstract: Deep generative models with tractable likelihoods, such as normalizing flows, often assign unexpectedly high likelihood to out-of-distribution (OOD) inputs unseen during training. We address this likelihood paradox by manipulating input entropy in a way that reflects semantic similarity, so that OOD samples receive stronger perturbations than in-distribution samples. We provide a theoretical analysis showing how entropy control widens the expected log-likelihood gap between in-distribution and OOD inputs, and explain why the procedure works without any additional training of the density model. Evaluated against likelihood-based OOD detectors on standard benchmarks, our method consistently improves AUROC over baselines, supporting the proposed explanation.
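The scoring recipe the abstract describes can be sketched as follows. This is a toy illustration only: the isotropic Gaussian `log_likelihood` stands in for a trained density model (e.g., a normalizing flow), and the moving-average `reduce_entropy` is a hypothetical entropy-reducing perturbation; the paper's actual entropy-manipulation procedure is not detailed in the abstract.

```python
import numpy as np

def log_likelihood(x, mu, sigma):
    """Log-density of x under an isotropic Gaussian N(mu, sigma^2 I).
    Stand-in for the trained density model's log p(x)."""
    d = x.size
    return -0.5 * (np.sum((x - mu) ** 2) / sigma**2
                   + d * np.log(2 * np.pi * sigma**2))

def reduce_entropy(x, k=3):
    """Smooth x with a length-k moving average -- a crude, hypothetical
    entropy-reducing perturbation."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def ood_score(x, mu, sigma):
    """Magnitude of the log-likelihood shift caused by the perturbation.
    Inputs far from the training density tend to shift more."""
    return abs(log_likelihood(reduce_entropy(x), mu, sigma)
               - log_likelihood(x, mu, sigma))

# Toy check: high-variance noise (OOD) shifts more than in-distribution noise.
rng = np.random.default_rng(0)
x_in = rng.standard_normal(100)        # matches the model's scale
x_ood = 5.0 * rng.standard_normal(100)  # far from the training density
mu = np.zeros(100)
print(ood_score(x_in, mu, 1.0) < ood_score(x_ood, mu, 1.0))
```

The score here uses the likelihood *shift* under perturbation rather than the raw likelihood, which is one simple way to exploit the claim that OOD inputs respond more strongly to entropy control.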
Supplementary Material: zip
Primary Area: generative models
Submission Number: 10792