Keywords: Probabilistic Circuits, Autoencoders, Representation Learning, Hybrid Models, Tractable Inference, Missing Data, Differentiable Sampling, Probabilistic Embeddings, Robustness, Deep Generative Models, Knowledge Distillation
TL;DR: We introduce Autoencoding Probabilistic Circuits, a novel hybrid framework that couples tractable probabilistic circuit encoders with neural decoders to learn explicit probabilistic embeddings end-to-end.
Abstract: Probabilistic Circuits (PCs) enable exact and tractable inference, yet their application to representation learning remains underexplored. We introduce Autoencoding Probabilistic Circuits (APCs), a novel framework that leverages PC tractability to explicitly model probabilistic embeddings. APCs jointly model data and latent representations: a PC encoder obtains embeddings via probabilistic inference and is integrated with a neural decoder in an end-to-end trainable hybrid architecture, enabled by differentiable sampling. Empirical evaluations demonstrate that APCs outperform existing PC-based autoencoding methods in reconstruction quality, generate embeddings competitive with those of neural autoencoders, and exhibit superior robustness to missing data without requiring imputation. These results establish APCs as a powerful and flexible representation learning method that exploits the inference capabilities of PCs for robust applications, including out-of-distribution detection and knowledge distillation.
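For intuition, the following is a minimal sketch of the encode-decode loop the abstract describes: a tractable joint model over data and latents acts as the encoder, an embedding is drawn from the conditional posterior via differentiable sampling, and a neural decoder reconstructs the input; missing features are handled by exact marginalization rather than imputation. Everything here is illustrative, not the authors' implementation: the mixture-of-Gaussians stand-in for the PC, the straight-through Gumbel-softmax component selection, and all class and variable names are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureEncoder(nn.Module):
    # Toy tractable joint model p(x, z): a single mixture whose components
    # factorize over x- and z-dimensions (a one-sum-node PC). Conditioning
    # on x and marginalizing missing features are both exact and cheap.
    def __init__(self, n_components, x_dim, z_dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_components))        # mixture weights
        self.x_mu = nn.Parameter(torch.randn(n_components, x_dim))   # Gaussian leaves over x
        self.x_logvar = nn.Parameter(torch.zeros(n_components, x_dim))
        self.z_mu = nn.Parameter(torch.randn(n_components, z_dim))   # Gaussian leaves over z
        self.z_logvar = nn.Parameter(torch.zeros(n_components, z_dim))

    def posterior_sample(self, x, mask, tau=1.0):
        # Differentiably sample z ~ p(z | x_observed); missing dims (mask == 0)
        # are marginalized exactly by omitting their leaf likelihood terms.
        ll = -0.5 * ((x.unsqueeze(1) - self.x_mu) ** 2 / self.x_logvar.exp()
                     + self.x_logvar)                        # (B, K, x_dim)
        ll = (ll * mask.unsqueeze(1)).sum(-1) + self.logits  # (B, K), missing dims dropped
        w = F.gumbel_softmax(ll, tau=tau, hard=True)         # differentiable component choice
        mu = w @ self.z_mu                                   # selected component's mean
        std = (0.5 * (w @ self.z_logvar)).exp()
        return mu + std * torch.randn_like(std)              # reparameterized Gaussian leaf

# End-to-end training step: PC encoder -> sampled embedding -> neural decoder.
x_dim, z_dim = 16, 4
enc = MixtureEncoder(n_components=8, x_dim=x_dim, z_dim=z_dim)
dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

x = torch.randn(64, x_dim)
mask = (torch.rand_like(x) > 0.3).float()    # ~30% of features missing, no imputation
z = enc.posterior_sample(x, mask)
loss = ((dec(z) - x) ** 2 * mask).mean()     # reconstruct observed entries only
opt.zero_grad(); loss.backward(); opt.step()

Because both the reparameterized Gaussian draw and the straight-through Gumbel-softmax pass gradients, the reconstruction loss trains the (toy) PC encoder and the neural decoder jointly, which is the hybrid end-to-end setup the abstract refers to.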
Submission Number: 10