Learning Less-correlated Features in Network Aggregation

25 Sept 2022 (modified: 18 Aug 2023) · OpenReview Anonymous Preprint Blind Submission
Abstract: This paper proposes a novel learning method that leverages multiple representations effectively. Features aggregated after individual training, or after passing through extra complicated heads, tend to be redundant in the feature space. Instead, we explicitly push the representations to be less correlated during training: by training with the proposed decorrelation loss, the networks learn different representations of the target task with reduced redundancy. Furthermore, we propose a new network architecture consisting of lightweight sub-networks, which turns out to be efficient yet highly capable compared with prior art that uses heavy head architectures, and which works together with the proposed learning method to learn less-correlated features. We additionally provide an analysis revealing the relationship between less-correlated features and performance. Finally, our proposed model outperforms recent state-of-the-art models with higher throughput when evaluated on ImageNet. We believe the resultant model is a positive byproduct of combining less-correlated feature learning with the efficient architecture design. Our code will be publicly released.
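As a concrete illustration of the idea described in the abstract, below is a minimal PyTorch sketch of one plausible form of a decorrelation loss: it penalizes the pairwise squared cosine similarity between features produced by different sub-networks. The function name `decorrelation_loss`, the squared-cosine formulation, and the weighting coefficient `lam` are illustrative assumptions, not the paper's actual formulation, which is not specified in the abstract.

```python
import torch
import torch.nn.functional as F

def decorrelation_loss(features):
    """Penalize pairwise correlation between sub-network features.

    features: list of tensors, each of shape (batch, dim), one per sub-network.
    Returns a scalar: the mean squared per-sample cosine similarity over all
    pairs of feature branches. Driving this toward zero pushes the branches
    to produce less-correlated representations.
    """
    loss = 0.0
    num_pairs = 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            f_i = F.normalize(features[i], dim=1)
            f_j = F.normalize(features[j], dim=1)
            # per-sample cosine similarity between branch i and branch j
            cos = (f_i * f_j).sum(dim=1)
            loss = loss + (cos ** 2).mean()
            num_pairs += 1
    return loss / max(num_pairs, 1)

if __name__ == "__main__":
    # Stand-in for two sub-network outputs on a batch of 8 samples.
    f1, f2 = torch.randn(8, 128), torch.randn(8, 128)
    lam = 0.1  # hypothetical weight balancing task loss and decorrelation
    # In training, the total objective would combine the task loss with
    # lam * decorrelation_loss([f1, f2]).
    print(decorrelation_loss([f1, f2]))
```

In a setup like this, the decorrelation term is added to the task loss so that the lightweight sub-networks are trained jointly on the target task while being explicitly discouraged from learning redundant features.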