A Representation Bottleneck of Bayesian Neural Networks

22 Sept 2022 (modified: 13 Feb 2023)
ICLR 2023 Conference Withdrawn Submission
Readers: Everyone
Keywords: interpretability, Bayesian neural network
TL;DR: We theoretically prove and empirically verify a representation bottleneck of Bayesian neural networks.
Abstract: Unlike standard deep neural networks (DNNs), Bayesian neural networks (BNNs) formulate network weights as probability distributions, which gives them representation capacities distinct from those of standard DNNs. In this paper, we explore the representation bottleneck of BNNs from the perspective of conceptual representations. We prove that the logic of a neural network can be faithfully mimicked by a specific sparse causal graph, in which each causal pattern can be regarded as a concept encoded by the network. We then formally define the complexity of concepts and prove that, compared to standard DNNs, it is more difficult for BNNs to encode complex concepts. Extensive experiments verify our theoretical findings. The code will be released when the paper is accepted.
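To make the "concepts as causal patterns" idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) assuming concepts are modeled as interaction patterns over subsets of input variables, as in related interaction-based interpretability work; the order |S| of a subset serves as the concept's complexity, and the toy model, masking scheme, and function names are illustrative assumptions.

```python
# Hedged sketch: quantify concepts as interaction patterns I(S) over input
# subsets S, with |S| taken as the concept's complexity. The masking-based
# definition of v(T) and the toy model are assumptions for illustration only.
from itertools import combinations
import numpy as np

def v(model, x, baseline, subset):
    """Model output when only variables in `subset` are present;
    the remaining variables are replaced by a baseline value."""
    masked = baseline.copy()
    masked[list(subset)] = x[list(subset)]
    return model(masked)

def interaction(model, x, baseline, S):
    """I(S) = sum over T ⊆ S of (-1)^{|S|-|T|} * v(T)."""
    total = 0.0
    for k in range(len(S) + 1):
        for T in combinations(S, k):
            total += (-1) ** (len(S) - len(T)) * v(model, x, baseline, T)
    return total

if __name__ == "__main__":
    # Toy scalar model on n = 5 input variables (stands in for a DNN/BNN).
    n = 5
    rng = np.random.default_rng(0)
    W = rng.normal(size=(n,))
    model = lambda z: float(np.tanh(W @ z) + 0.5 * z[0] * z[1])
    x, baseline = rng.normal(size=n), np.zeros(n)

    # Average concept strength at each complexity (order) 1..n; a
    # representation bottleneck would appear as weak high-order concepts.
    for order in range(1, n + 1):
        strengths = [abs(interaction(model, x, baseline, S))
                     for S in combinations(range(n), order)]
        print(f"order {order}: mean |I(S)| = {np.mean(strengths):.4f}")
```

In such a setup, comparing the per-order strength profile of a standard DNN against that of a BNN (e.g., averaged over weight samples) would be one way to probe the claimed difficulty of encoding complex, high-order concepts.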
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)