Visualizing the Emergence of Primitive Interactions During the Training of DNNs

16 Sept 2023 (modified: 25 Mar 2024), ICLR 2024 Conference Withdrawn Submission
Keywords: Visualization, Representation Complexity, Neural Network
TL;DR: This study visualizes and investigates how a DNN gradually learns different primitive interactions during training.
Abstract: Although the learning of deep neural networks (DNNs) is widely believed to be a fitting process without an explicit symbolic structure, previous studies have discovered (Ren et al., 2023a; Li & Zhang, 2023b) and proven (Ren et al., 2023c) that well-trained DNNs usually encode sparse interactions, which can be considered primitives of inference. In this study, we redefine the interaction on principal feature components of intermediate-layer features, which significantly simplifies the interactions and enables us to explore their dynamics throughout the training of the DNN. Specifically, we visualize how new interactions are gradually learned and how previously learned interactions are gradually forgotten during training. We categorize all interactions into five distinct groups (reliable, withdrawing, forgetting, betraying, and fluctuating interactions), which provides a novel perspective for understanding the learning process of DNNs. (A minimal sketch of the interaction computation follows the submission metadata below.)
Primary Area: visualization or interpretation of learned representations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 594
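
The following is a minimal, illustrative sketch (not the authors' code) of how sparse interactions of the kind discussed in the abstract can be computed. It assumes the Harsanyi-dividend definition of interactions used in the cited line of work (Ren et al., 2023c), I(S) = sum over T subset of S of (-1)^(|S|-|T|) v(T), where v(T) is the network output when only the feature components in T are kept. The masking scheme and the helper v below are illustrative assumptions, not the paper's exact redefinition on principal feature components.

from itertools import chain, combinations
import random

def powerset(indices):
    # All subsets (as tuples) of the given component indices.
    return chain.from_iterable(
        combinations(indices, r) for r in range(len(indices) + 1)
    )

def harsanyi_interactions(v, n_components):
    # Returns {S: I(S)} for every subset S of the n_components components,
    # where I(S) = sum_{T subseteq S} (-1)^(|S|-|T|) v(T).
    # v(subset) is assumed to give the scalar network output when only the
    # components in `subset` are kept and the rest are masked to a baseline.
    indices = list(range(n_components))
    v_cache = {frozenset(S): v(S) for S in powerset(indices)}
    interactions = {}
    for S in v_cache:
        I_S = 0.0
        for T in powerset(sorted(S)):
            I_S += (-1) ** (len(S) - len(T)) * v_cache[frozenset(T)]
        interactions[S] = I_S
    return interactions

# Toy usage: a random lookup table stands in for the masked network output.
random.seed(0)
table = {frozenset(S): random.gauss(0.0, 1.0) for S in powerset(range(3))}
I = harsanyi_interactions(lambda S: table[frozenset(S)], n_components=3)
print(len(I))  # 2**3 = 8 interaction values for 3 components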