Duality of Information Flow: Insights in Graphical Models and Neural Networks

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Bayesian neural network, Probabilistic graphical models, Message-passing algorithm, Langevin dynamics, Fokker-Planck dynamics
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Discovering deep connections between probabilistic graphical models and neural networks, revealing their equivalence and enhancing modeling insights.
Abstract: This research highlights the convergence of probabilistic graphical models and neural networks, shedding light on their inherent similarities and interactions. By interpreting Bayesian neural networks within the framework of Markov random fields, we uncover deep connections between message passing and neural network propagation. In particular, we show a striking equivalence between gradients in neural networks and posterior-prior differences in graphical models. Empirical evaluations across diverse scenarios and datasets demonstrate the efficacy and generalizability of our approach. This work offers a novel perspective on Bayesian neural networks and probabilistic graphical models, providing insights that could pave the way for improved models and a deeper understanding of their relationship.
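To make the abstract's "gradients vs. posterior-prior differences" claim concrete, here is a minimal illustrative sketch, not the paper's construction: for a toy conjugate Gaussian model, the gradient of the negative log-joint with respect to a weight is exactly a (precision-scaled) difference between the current weight and the closed-form posterior mean, so gradient descent literally follows a posterior-vs-current gap. All names and values below are hypothetical and chosen only for illustration.

```python
# Illustrative sketch (assumed toy model, not the submission's method):
# Bayesian linear regression y = w*x + noise with a Gaussian prior on w.
import numpy as np

rng = np.random.default_rng(0)

w_true, sigma2, tau2 = 2.0, 0.25, 1.0          # true weight, noise var, prior var
x = rng.normal(size=50)
y = w_true * x + rng.normal(scale=np.sqrt(sigma2), size=50)

# Exact Gaussian posterior over w under the prior N(0, tau2).
post_prec = x @ x / sigma2 + 1.0 / tau2
post_mean = (x @ y / sigma2) / post_prec

def neg_log_joint_grad(w):
    """Gradient of -log p(y, w | x); algebraically equals post_prec * (w - post_mean)."""
    return -(x @ (y - w * x)) / sigma2 + w / tau2

w = 0.0                                         # start at the prior mean
lr = 0.1 / post_prec                            # step size below the 2/precision stability bound
for _ in range(100):
    w -= lr * neg_log_joint_grad(w)             # each step follows a posterior-vs-current difference

print(f"posterior mean: {post_mean:.4f}, gradient-descent estimate: {w:.4f}")
```

In this conjugate case the correspondence is exact; the abstract's claim concerns the analogous (and more general) relationship for Bayesian neural networks viewed as Markov random fields, which the paper develops.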
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7497