Progressive Coarse-graining and Deep Neural Networks (DNNs)

ICLR 2026 Conference Submission13301 Authors

18 Sept 2025 (modified: 08 Oct 2025)

License: CC BY 4.0
Keywords: Coarse-graining, deep neural networks, rank estimation, linear separability, information bottleneck, Bayesian learning, lottery ticket hypothesis
TL;DR: We provide an overview of various theories about deep neural network function, and forward an overarching summary inspired by Erik Hoel's Causal Emergence 2.0 framework.
Abstract: We try to provide an overarching perspective on some of the research from the last few years that explains the behaviour of deep neural networks (DNNs) when they are used to complete a variety of classification and prediction tasks. We start by providing an overview of several noteworthy papers on the fundamental properties of DNNs across different architectures and data regimes. We then forward our own integrated perspective of DNNs as progressive coarse-graining systems, inspired by Erik Hoel's Causal Emergence 2.0 framework.
Primary Area: interpretability and explainable AI
Submission Number: 13301