Disentanglement, Visualization and Analysis of Complex Features in DNNs

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Interpretability
Abstract: This paper aims to define, visualize, and analyze the feature complexity learned by a DNN. We propose a generic definition of feature complexity. Given the feature of a certain layer in the DNN, our method disentangles and visualizes feature components of different complexity orders from that feature. This disentanglement enables us to evaluate the reliability and effectiveness of each feature component, as well as the extent to which it over-fits. Such analysis further helps to improve the performance of DNNs. As a generic metric, feature complexity also provides new insights into existing deep-learning techniques, such as network compression and knowledge distillation. We will release the code when the paper is accepted.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We define, visualize, and analyze feature complexity based on the complexity of nonlinear operations, which helps improve the performance of DNNs and provides insights into existing deep-learning techniques.
Reviewed Version (pdf): https://openreview.net/references/pdf?id=3VZ0nLDUev
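
The abstract does not spell out how complexity orders are defined or extracted, so the following is only a minimal illustrative sketch, assuming (per the one-sentence summary) that complexity is measured by the number of nonlinear operations: small networks with an increasing number of ReLU layers approximate the analyzed feature, and successive differences between these approximations serve as components of increasing complexity order. The Disentangler architecture, MSE loss, and training loop below are assumptions for illustration, not the authors' released implementation.

# Illustrative sketch only: decompose an intermediate DNN feature into
# components approximable with increasing numbers of nonlinear layers.
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Approximates a target feature using at most `n_nonlinear` ReLU layers."""
    def __init__(self, dim, n_nonlinear):
        super().__init__()
        layers = []
        for _ in range(n_nonlinear):
            layers += [nn.Linear(dim, dim), nn.ReLU()]
        layers.append(nn.Linear(dim, dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def decompose(feature_in, feature_target, max_order=3, epochs=200, lr=1e-3):
    """Returns feature components of complexity orders 1..max_order.

    feature_in:     input to the analyzed layer, shape (N, dim)
    feature_target: the feature to disentangle, shape (N, dim)
    """
    dim = feature_in.shape[1]
    approximations = [torch.zeros_like(feature_target)]
    for order in range(1, max_order + 1):
        # Train a separate approximator limited to `order` nonlinear layers.
        model = Disentangler(dim, order)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(feature_in), feature_target)
            loss.backward()
            opt.step()
        with torch.no_grad():
            approximations.append(model(feature_in))
    # The order-k component is what the k-layer approximation captures
    # beyond the (k-1)-layer approximation.
    return [approximations[k] - approximations[k - 1] for k in range(1, max_order + 1)]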