Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

21 May 2021, 20:44 (edited 26 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: discrete-continuous learning, gradient estimation, combinatorial optimization, exponential family distribution
  • TL;DR: Integrating discrete probability distributions and combinatorial optimization problems into neural networks.
  • Abstract: Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches which rely on problem-specific relaxations.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/nec-research/tf-imle
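The abstract states that I-MLE only requires the ability to compute most probable (MAP) states, and combines perturb-and-MAP sampling with a gradient estimate driven by a target distribution. The sketch below illustrates that recipe on a k-subset distribution; it is a minimal illustration, not the authors' implementation (see the linked repository for that). The `topk_map` solver, standard Gumbel noise, and the step size `lam` are illustrative assumptions.

```python
import numpy as np

def topk_map(theta, k):
    """MAP solver for a k-subset distribution: indicator of the k highest-scoring entries."""
    z = np.zeros_like(theta)
    z[np.argsort(-theta)[:k]] = 1.0
    return z

def imle_gradient(theta, dloss_dz, k, lam=10.0, rng=None):
    """One-sample I-MLE-style gradient estimate via perturb-and-MAP.

    Sample noise eps, take the MAP state of the perturbed scores, then the
    MAP state of the scores shifted toward a target distribution
    (theta - lam * dL/dz, same noise), and return their scaled difference.
    Gumbel noise and lam are assumptions for this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.gumbel(size=theta.shape)
    z = topk_map(theta + eps, k)                          # forward sample
    z_target = topk_map(theta - lam * dloss_dz + eps, k)  # target sample, shared noise
    return (z - z_target) / lam
```

Because both MAP states select exactly k entries, the estimate always sums to zero: it moves probability mass toward states favored by the downstream loss without any smooth relaxation of the discrete solver.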