Towards the Difficulty for a Deep Neural Network to Learn Concepts of Different Complexities

Published: 21 Sept 2023, Last Modified: 24 Dec 2023 · NeurIPS 2023 poster
Keywords: representation complexity, deep learning
TL;DR: This paper theoretically explains the underlying mechanism that makes simple concepts more likely to be learned by DNNs than complex ones.
Abstract: This paper theoretically explains the intuition that simple concepts are more likely to be learned by deep neural networks (DNNs) than complex concepts. Recent studies have observed [24, 15] and proved [26] the emergence of interactive concepts in a DNN, i.e., a DNN usually encodes only a small number of interactive concepts and can be considered to use their interaction effects to compute its inference scores. Each interactive concept encoded by the DNN represents the collaboration between a set of input variables. Therefore, in this study, we aim to theoretically explain why interactive concepts involving more input variables (i.e., more complex concepts) are more difficult to learn. Our finding clarifies the exact notion of conceptual complexity that determines learning difficulty.
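For readers unfamiliar with the interaction formulation this line of work [24, 15, 26] builds on, the following is a minimal sketch of the standard Harsanyi interaction decomposition commonly used in these studies; the notation (the set N, the masked output v(x_T), the effect I(S)) is supplied here for illustration, and the paper's own definitions may differ in detail.

```latex
% Sketch of the Harsanyi interaction decomposition (assumed formulation;
% the paper's exact definitions may differ). Let N = {1, ..., n} denote the
% input variables, and let v(x_T) denote the DNN's inference score on the
% masked input in which only the variables in T are kept.
%
% The interaction effect of the concept S is the Harsanyi dividend, and the
% full inference score decomposes exactly into these effects:
\[
  I(S) \;=\; \sum_{T \subseteq S} (-1)^{|S|-|T|}\, v(x_T),
  \qquad
  v(x_N) \;=\; \sum_{S \subseteq N} I(S).
\]
% Under this formulation, a concept involving more input variables
% (a larger |S|, i.e., a higher-order interaction) is a more complex
% concept, which is the notion of complexity the paper studies.
```

In this view, "simple concepts are easier to learn" corresponds to low-order interactions (small |S|) being learned before high-order ones.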
Supplementary Material: pdf
Submission Number: 963