Formulating and Proving the Trend of DNNs Learning Simple Concepts

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: representation complexity, deep neural network
TL;DR: We theoretically prove and empirically verify that DNNs mainly learn simple interactive concepts.
Abstract: This paper theoretically explains the intuition that deep neural networks (DNNs) are more likely to learn simple concepts than complex ones. Going beyond empirical studies, our work first gives an exact definition of the concept complexity that determines learning difficulty. Specifically, we prove that the inference logic of a neural network can be represented as a causal graph, so that causal patterns in this graph formulate the interactive concepts learned by the network. Based on this formulation, we explain why simple interactive concepts in the data are more likely to be learned than complex ones. More crucially, our results provide a new perspective for explaining previous understandings of conceptual complexity. The code will be released when the paper is accepted.
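
The abstract itself gives no formulas, but "interactive concepts" in this research line are commonly quantified via the Harsanyi dividend, I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), where v(T) is the network output when only the input variables in T are present and the rest are masked to a baseline. The sketch below is a minimal illustration under that assumption; the toy set function v, the masking scheme, and the variable indices are hypothetical stand-ins, not the paper's actual definitions.

```python
# A minimal sketch of the Harsanyi dividend I(S), one common way to
# formulate interactive concepts; the set function v below is a toy
# stand-in for a masked network evaluation (an assumption, not the
# paper's definition).
from itertools import chain, combinations

def powerset(s):
    """All subsets of the tuple s, from the empty set up to s itself."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_interaction(v, S):
    """Harsanyi dividend I(S) = sum over T subseteq S of (-1)^(|S|-|T|) v(T)."""
    return sum((-1) ** (len(S) - len(T)) * v(frozenset(T))
               for T in powerset(tuple(S)))

# Toy set function: rewards the co-occurrence of variables 0 and 1
# (an "interactive concept" of order 2) plus a small per-variable effect.
def v(T):
    return 1.0 * (0 in T and 1 in T) + 0.1 * len(T)

print(harsanyi_interaction(v, {0}))     # 0.1  (order-1 effect only)
print(harsanyi_interaction(v, {0, 1}))  # 1.0  (the pairwise concept)
print(harsanyi_interaction(v, {0, 2}))  # 0.0  (no interaction between 0 and 2)
```

Under this formulation, the "complexity" of a concept corresponds to the order |S| of the coalition, so the paper's claim amounts to DNNs preferentially learning low-order interactions before high-order ones.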
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning