Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: Probabilistic Graphical Models, Robust and Reliable ML, Distribution Shift
Abstract: Tractable probabilistic models (TPMs) have attracted substantial research interest in recent years, particularly because of their ability to answer various reasoning queries in polynomial time. In this study, we focus on distributionally robust learning of continuous TPMs and address the challenge of distribution shift at test time by solving the underlying adversarial risk minimization problem. Specifically, we show that this problem can be solved efficiently whenever the model permits exact log-likelihood evaluation and efficient learning from weighted data. Our experimental results on several real-world datasets show that our approach achieves significantly higher log-likelihoods on adversarial test sets. Remarkably, the model learned via distributionally robust learning can at times achieve a higher average log-likelihood even on the original, uncorrupted test set.
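To make the abstract's key condition concrete, here is a minimal, hypothetical sketch of the general alternating recipe it implies: an adversary reweights the training samples toward high-loss points, and the model is refit by weighted maximum likelihood. This is not the paper's algorithm; a univariate Gaussian stands in for a TPM because it satisfies both stated requirements (exact log-likelihood evaluation and efficient learning from weighted data), the closed-form exponential-tilt weights correspond to a KL-constrained adversary, and the names `dro_fit` and `tau` are illustrative assumptions.

```python
import numpy as np

def gaussian_nll(x, mu, var):
    # Per-sample negative log-likelihood of a univariate Gaussian.
    return 0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def weighted_gaussian_mle(x, w):
    # Weighted maximum-likelihood estimates (weights sum to 1).
    mu = np.sum(w * x)
    var = np.sum(w * (x - mu) ** 2)
    return mu, var

def dro_fit(x, tau=1.0, n_iters=50):
    """Alternate between the adversary's reweighting and weighted MLE.
    tau controls how aggressively weight shifts toward high-loss samples
    (i.e., the size of the adversary's divergence ball)."""
    n = len(x)
    w = np.full(n, 1.0 / n)
    mu, var = weighted_gaussian_mle(x, w)
    for _ in range(n_iters):
        nll = gaussian_nll(x, mu, var)
        # For a KL-constrained adversary the worst-case weights have the
        # closed form w_i proportional to exp(nll_i / tau).
        logits = nll / tau
        logits -= logits.max()   # subtract max for numerical stability
        w = np.exp(logits)
        w /= w.sum()
        mu, var = weighted_gaussian_mle(x, w)
    return mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=500)
    print(dro_fit(x, tau=2.0))
```

Because the adversarial weights upweight the tails, the fitted variance comes out larger than the plain MLE, which is the intended hedge against test-time distribution shift.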
List Of Authors: Hailiang Dong, James Amato, Vibhav Gogate, Nicholas Ruozzi
LaTeX Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/LeonDong1993/UAI2024-RobustLearning
Submission Number: 455