Hurdle conjugate priors for scalable Tucker decomposition

Published: 17 Jun 2024 · Last Modified: 17 Jul 2024 · ICML2024-AI4Science Spotlight · CC BY 4.0
Keywords: Bayesian, Tucker, tensor decomposition, sparsity
Abstract: This paper introduces a novel inference scheme for a class of hurdle priors that exploits sparsity to scale large machine learning models whose likelihood distributions are convolution-closed, such as the Gaussian and Poisson. We call this the convolution-closed hurdle motif and focus on non-negative Tucker decomposition, a popular tool for modeling multi-way relational data. We apply an instance of this class, the hurdle gamma prior, to a probabilistic non-negative Tucker model and derive an inference scheme that scales only with the non-zero latent parameters in the core tensor. This scheme avoids the exponential blowup in computational cost typical of Tucker decomposition, efficiently fitting the data to a high-dimensional latent space. We derive and implement a closed-form Gibbs sampler for full posterior inference and fit our model to longitudinal microbiome data. Using this hurdle motif to train our model quickly, we reveal interpretable qualitative structure and obtain encouraging classification results.
Submission Number: 197
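
As a concrete illustration of the motif described in the abstract, here is a minimal sketch, not the authors' implementation, of a hurdle gamma prior on the Tucker core and a Poisson reconstruction whose cost scales with the number of non-zero core entries. All names and hyperparameters (`pi`, `a0`, `b0`, the toy dimensions) are assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hurdle_gamma_core(shape, pi=0.1, a0=1.0, b0=1.0):
    # Hurdle gamma prior (illustrative): each core entry is exactly zero
    # with probability 1 - pi, and drawn from Gamma(a0, rate=b0) otherwise.
    mask = rng.random(shape) < pi
    vals = rng.gamma(a0, 1.0 / b0, size=shape)  # numpy uses scale = 1/rate
    return np.where(mask, vals, 0.0)

def poisson_tucker_rate(G, U):
    # Poisson rate tensor of a non-negative Tucker model; the loop visits
    # only the non-zero core entries, so cost scales with nnz(G) rather
    # than the full (exponentially large) core.
    rate = np.zeros(tuple(Um.shape[0] for Um in U))
    for idx in zip(*np.nonzero(G)):
        outer = U[0][:, idx[0]]
        for m in range(1, len(U)):
            outer = np.multiply.outer(outer, U[m][:, idx[m]])
        rate += G[idx] * outer
    return rate

# Toy 3-way count tensor: observed dims (10, 12, 8), latent ranks (4, 4, 4).
dims, ranks = (10, 12, 8), (4, 4, 4)
U = [rng.gamma(1.0, 1.0, size=(d, r)) for d, r in zip(dims, ranks)]
G = sample_hurdle_gamma_core(ranks)
Y = rng.poisson(poisson_tucker_rate(G, U))  # simulated multi-way counts
print(f"non-zero core entries: {np.count_nonzero(G)} / {G.size}")
```

The sketch shows only the generative side that the sparsity argument relies on; a full closed-form Gibbs sampler as described in the abstract would additionally alternate conditional updates of the factor matrices, the core values, and the hurdle masks.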