Graphon-Based Information Bottleneck Analysis of Neural Networks via Stochastic Block Models

NeurIPS 2025 Workshop NeurReps Submission 92 Authors

30 Aug 2025 (modified: 29 Oct 2025) · Submitted to NeurReps 2025 · CC BY 4.0
Keywords: Information Bottleneck, Graph Limits, Graphons, Neural Networks, AI Explainability, Information Theory, Neural Representations.
TL;DR: We propose an SBM graphon-based framework for analyzing large multilayer perceptrons (MLPs) through the Information Bottleneck (IB) principle.
Abstract: Deep neural networks are often analyzed through the Information Bottleneck (IB) principle, which formalizes a trade-off between compressing inputs and preserving information relevant for predicting the target variable. Although the principle is conceptually appealing, directly estimating mutual information in large architectures is computationally challenging. We propose a graphon-based approach that approximates multilayer perceptrons by fitting weighted stochastic block models (WSBMs) to their weight matrices. The resulting SBM graphons capture the modular structure that emerges during training and enable tractable block-level estimates of $I(X;T)$ and $I(T;Y)$. Our analysis yields preliminary results toward more interpretable IB planes and introduces block-to-block information flow maps, which qualitatively align with classical IB theory. This framework connects graph limit theory with neural network interpretability, offering a scalable geometric abstraction for analyzing information flow in deep models.
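To make the pipeline concrete, here is a minimal Python sketch (not the submission's code): it co-clusters a layer's weight matrix into blocks, averages within blocks to obtain the step-function (SBM) graphon values, and computes a plug-in histogram estimate of mutual information at the block level. The k-means co-clustering stands in for full WSBM likelihood inference, and the block counts, toy data, and MI estimator are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def fit_block_graphon(W, k_out=4, k_in=4):
    # Cluster output neurons (rows) and input neurons (columns) of W,
    # then average weights within each block: a step-function graphon.
    row_labels = KMeans(n_clusters=k_out, n_init=10, random_state=0).fit_predict(W)
    col_labels = KMeans(n_clusters=k_in, n_init=10, random_state=0).fit_predict(W.T)
    B = np.zeros((k_out, k_in))
    for a in range(k_out):
        for b in range(k_in):
            block = W[np.ix_(row_labels == a, col_labels == b)]
            B[a, b] = block.mean() if block.size else 0.0
    return row_labels, col_labels, B

def histogram_mi(x, t, bins=16):
    # Plug-in estimate of I(X;T) from a 2-D histogram of the samples.
    joint, _, _ = np.histogram2d(x, t, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    pt = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (px @ pt)[mask])))

# Toy stand-ins: a trained MLP layer would supply W and real inputs X.
W = rng.normal(size=(64, 32))            # one layer, 32 -> 64 units
X = rng.normal(size=(1000, 32))
rows, cols, B = fit_block_graphon(W)
print("block-averaged graphon values:\n", B.round(2))

# Block-level representation T: mean activation over each output block.
H = np.tanh(X @ W.T)
T = np.stack([H[:, rows == a].mean(axis=1) for a in range(B.shape[0])], axis=1)
print("I(X_0; T_0) ~", round(histogram_mi(X[:, 0], T[:, 0]), 3))
```

Stacking the block-level $I(X;T)$ and $I(T;Y)$ estimates across layers would trace the IB plane at block rather than neuron resolution, which is the tractability gain the abstract describes.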
Submission Number: 92