Knowledge Consistency between Neural Networks and Beyond

Published: 20 Dec 2019, Last Modified: 05 May 2023
ICLR 2020 Conference Blind Submission
Keywords: Deep Learning, Interpretability, Convolutional Neural Networks
Abstract: This paper aims to analyze knowledge consistency between pre-trained deep neural networks. We propose a generic definition of knowledge consistency between neural networks at different fuzziness levels. A task-agnostic method is designed to disentangle feature components that represent the consistent knowledge from raw intermediate-layer features of each neural network. As a generic tool, our method can be broadly used for different applications. In preliminary experiments, we have used knowledge consistency as a tool to diagnose representations of neural networks. Knowledge consistency provides new insights into the success of existing deep-learning techniques, such as knowledge distillation and network compression. More crucially, knowledge consistency can also be used to refine pre-trained networks and boost their performance.
Data: [CUB-200-2011](https://paperswithcode.com/dataset/cub-200-2011), [ImageNet](https://paperswithcode.com/dataset/imagenet)
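The abstract does not detail the disentanglement method itself. As a rough, hypothetical illustration of the underlying idea, one could estimate how much "consistent knowledge" two networks share by asking how well one network's intermediate-layer features can be linearly reconstructed from the other's. The linear-probe setup, function names, and synthetic data below are all assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def linear_consistency(feat_a, feat_b):
    """Proxy for shared knowledge: R^2 of a least-squares map
    from net A's features to net B's features.

    feat_a, feat_b: (n_samples, dim) intermediate-layer activations.
    """
    # Fit a linear map W so that feat_a @ W approximates feat_b.
    w, *_ = np.linalg.lstsq(feat_a, feat_b, rcond=None)
    recon = feat_a @ w
    # R^2: fraction of B's feature variance explained by A's features.
    ss_res = np.sum((feat_b - recon) ** 2)
    ss_tot = np.sum((feat_b - feat_b.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: both "networks" see a common signal plus private noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(256, 16))  # knowledge common to both nets
feat_a = np.hstack([shared, rng.normal(size=(256, 8))])
feat_b = np.hstack([shared @ rng.normal(size=(16, 16)),
                    rng.normal(size=(256, 8))])
print(linear_consistency(feat_a, feat_b))  # close to 1: mostly shared
```

A high score here only indicates that the two representations are linearly related; the paper's notion of consistency at different fuzziness levels is presumably richer than this single linear probe.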