Probabilistic Modeling of Structure in Science: Statistical Physics to Recommender Systems.

06 May 2021 · OpenReview Archive Direct Upload · Readers: Everyone
Abstract: Applied machine learning relies on translating the structure of a problem into a computational model. This arises in applications as diverse as statistical physics and food recommender systems. The pattern of connectivity in an undirected graphical model or the fact that datapoints in food recommendation are unordered collections of features can inform the structure of a model. First, consider undirected graphical models from statistical physics like the ubiquitous Ising model. Basic research in physics requires scalable simulations for comparing the behavior of a model to its experimental counterpart. The Ising model consists of binary random variables with local connectivity; interactions between neighboring nodes can lead to long-range correlations. Modeling these correlations is necessary to capture physical phenomena such as phase transitions. To mirror the local structure of these models, we use flow-based convolutional generative models that can capture long-range correlations. Combining flow-based models designed for continuous variables with recent work on hierarchical variational approximations enables the modeling of discrete random variables. Compared to existing variational inference methods, this approach scales to statistical physics models with millions of correlated random variables and uses 100 times fewer parameters. Just as computational choices can be made by considering the structure of an undirected graphical model, model construction itself can be guided by the structure of individual datapoints. Consider a recommendation task where datapoints consist of unordered sets, and the objective is to maximize top-K recall, a common recommendation metric. Simple results show that a classifier with zero worst-case error achieves maximum top-K recall. Further, the unordered structure of the data suggests the use of a permutation-invariant classifier for statistical and computational efficiency. We evaluate such a classifier on human dietary behavior data, where every meal is an unordered collection of ingredients, and find that it outperforms probabilistic matrix factorization methods. Finally, we show that building problem structure into an approximate inference algorithm improves the accuracy of probabilistic modeling methods.
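
For context, the Ising model referenced in the abstract places a binary spin at each lattice site and defines a Boltzmann distribution over joint configurations. In standard notation (stated here for orientation, not taken from the thesis itself), the energy and distribution are

$$
E(\mathbf{s}) = -J \sum_{\langle i,j \rangle} s_i s_j - h \sum_i s_i,
\qquad
p(\mathbf{s}) = \frac{1}{Z}\, e^{-\beta E(\mathbf{s})},
\qquad
s_i \in \{-1, +1\},
$$

where the first sum runs over neighboring lattice sites, $J$ is the coupling strength, $h$ an external field, $\beta$ the inverse temperature, and $Z$ the partition function. Although every interaction is local, $p(\mathbf{s})$ can exhibit long-range correlations, which is why the abstract emphasizes capturing them near phase transitions.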
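
Top-K recall, the recommendation metric named above, is commonly defined (one standard convention, given here for context) as

$$
\text{Recall@}K \;=\; \frac{\lvert \text{relevant items} \,\cap\, \text{top-}K \text{ predictions} \rvert}{\lvert \text{relevant items} \rvert},
$$

so, intuitively, a classifier with zero worst-case error never ranks an irrelevant item above a relevant one and thus places as many relevant items as possible in its top-K list; the thesis gives the formal argument.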
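
To make the permutation-invariance idea concrete, the sketch below shows a Deep Sets-style classifier that embeds each ingredient id, pools with a symmetric sum, and scores candidates for top-K ranking. This is a minimal illustrative sketch; the architecture, names, and dimensions are our assumptions, not the model described in the thesis.

```python
# Minimal Deep Sets-style sketch of a permutation-invariant classifier.
# All names, dimensions, and architectural choices are illustrative
# assumptions, not the thesis model.
import torch
import torch.nn as nn

class SetClassifier(nn.Module):
    def __init__(self, num_items: int, embed_dim: int = 64, num_classes: int = 1000):
        super().__init__()
        self.embed = nn.Embedding(num_items, embed_dim)   # per-ingredient embedding
        self.rho = nn.Sequential(                         # applied after symmetric pooling
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # item_ids: (batch, set_size) integer ids; summing over the set axis
        # makes the output invariant to the order of the ingredients.
        pooled = self.embed(item_ids).sum(dim=1)
        return self.rho(pooled)                           # scores used for top-K ranking

# Usage: score candidates for a meal given as an unordered set of ingredient ids.
model = SetClassifier(num_items=5000)
scores = model(torch.tensor([[12, 7, 431]]))              # shape (1, num_classes)
topk = scores.topk(5, dim=-1).indices                     # top-K recommendations
```

Because the sum pooling is symmetric, permuting the ingredients of a meal leaves the scores unchanged, which is the statistical and computational efficiency the abstract points to.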