Are Your Continuous Approximations Really Continuous? Reimagining VI with Bitstring Representations

Published: 19 Mar 2025 · Last Modified: 25 Apr 2025 · AABI 2025 Workshop Track · License: CC BY 4.0
Keywords: Approximate inference, Quantization, Tractable models, Probabilistic circuits
TL;DR: We turn continuous parameters into discrete bitstrings for probabilistic inference
Abstract: Efficiently performing probabilistic inference in large models is a significant challenge due to the high computational demands and the continuous nature of the model parameters. At the same time, the ML community has put effort into quantizing the parameters of large-scale models to increase their computational efficiency. We extend this work by proposing a method for learning the probability distributions of quantized parameters via variational inference (VI). This enables effective learning of continuous distributions in a discrete space. We consider both 2D densities and quantized neural networks, where we introduce a tractable learning approach using probabilistic circuits. This method offers a scalable solution for managing complex distributions and provides clear insights into model behavior. We validate our approach in various settings, demonstrating its effectiveness.
Submission Number: 25
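To make the core idea in the abstract concrete, below is a minimal, hedged sketch (not the paper's implementation) of what "turning continuous parameters into discrete bitstrings" can look like: a continuous weight is mapped to a fixed-point bitstring, and a mean-field Bernoulli distribution over the individual bits serves as a simple discrete variational approximation. The function names, the 8-bit uniform quantization grid, and the mean-field factorization are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: fixed-point bitstring encoding of a weight plus a
# mean-field Bernoulli variational distribution over its bits. The paper's
# actual method (e.g., its use of probabilistic circuits) is not reproduced here.
import numpy as np


def quantize_to_bits(w, n_bits=8, w_min=-1.0, w_max=1.0):
    """Map a continuous weight in [w_min, w_max] to an n_bits fixed-point bitstring."""
    levels = 2 ** n_bits - 1
    code = int(round((np.clip(w, w_min, w_max) - w_min) / (w_max - w_min) * levels))
    return np.array([(code >> i) & 1 for i in reversed(range(n_bits))])


def dequantize_bits(bits, w_min=-1.0, w_max=1.0):
    """Recover the continuous value represented by a bitstring."""
    n_bits = len(bits)
    code = int("".join(map(str, bits)), 2)
    return w_min + code / (2 ** n_bits - 1) * (w_max - w_min)


# Mean-field variational distribution: one Bernoulli probability q(b_i = 1) per bit.
# Sampling a bitstring and dequantizing it yields a sample of the weight, so a
# discrete distribution over codes stands in for a continuous posterior.
rng = np.random.default_rng(0)
q_probs = np.full(8, 0.5)                          # initialised uniformly
sample_bits = (rng.random(8) < q_probs).astype(int)
print(quantize_to_bits(0.3))                       # e.g. [1 0 1 0 0 1 1 0]
print(dequantize_bits(sample_bits))                # continuous value of the sampled code
```

In a VI setting, the bit probabilities `q_probs` would be the variational parameters optimized against an ELBO; the sketch only shows the encode/decode plumbing that makes a discrete distribution over bitstrings act as a proxy for a continuous one.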
