On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors

Published: 03 Feb 2026 · Last Modified: 03 Feb 2026 · AISTATS 2026 Spotlight · CC BY 4.0
Abstract: Bayesian neural network (BNN) posteriors are often considered impractical for inference: symmetries fragment them, non-identifiabilities inflate their dimensionality, and weight-space priors are seen as meaningless. In this work, we study how overparametrization and priors together reshape BNN posteriors and derive implications that clarify their interplay. We show that redundancy introduces three key phenomena that fundamentally reshape the posterior geometry: layer balancedness, concentration of weights on equal-probability manifolds, and prior conformity. We validate our findings through extensive experiments with posterior sampling budgets that far exceed those of earlier work, and demonstrate how overparametrization induces structured, prior-aligned weight posterior distributions.
Submission Number: 5