Gram regularization for sparse and disentangled representation

Published: 01 Jan 2022, Last Modified: 15 May 2023. Pattern Anal. Appl. 2022.
Abstract: The relationship between samples is often ignored when training neural networks for classification. If properly utilized, this information can benefit the trained models in several ways. On the one hand, a network trained without regard to inter-sample similarities may map samples to nearby representations even when they belong to different classes, which undermines the discriminative ability of the trained model. On the other hand, regularizing inter-class and intra-class similarities in the feature space during training can effectively disentangle the representations of different classes and make the representation sparse. To this end, a new regularization method is proposed that penalizes positive inter-class similarities and negative intra-class similarities in the feature space. Experimental results show that the proposed method not only yields sparse and disentangled representations but also improves the performance of the trained models on many datasets.
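
Since only the abstract is available here, the exact loss is not given; the following is a minimal PyTorch sketch of how such a penalty might look, assuming cosine similarities computed from the mini-batch Gram matrix. The function name gram_regularization and the averaging convention are illustrative assumptions, not the paper's actual formulation.

    import torch
    import torch.nn.functional as F

    def gram_regularization(features, labels):
        """Sketch of a Gram-based penalty on batch feature similarities.

        Penalizes positive similarities between samples of different
        classes and negative similarities between samples of the same
        class, as described in the abstract.

        features: (B, D) tensor of penultimate-layer features.
        labels:   (B,)   tensor of integer class labels.
        """
        # Cosine-similarity Gram matrix of the batch, shape (B, B).
        # (Assumption: the paper may use raw inner products instead.)
        f = F.normalize(features, dim=1)
        gram = f @ f.t()

        # same[i, j] is True when samples i and j share a class.
        same = labels.unsqueeze(0) == labels.unsqueeze(1)

        # Exclude the diagonal: self-similarity is always 1.
        eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

        # Positive inter-class similarities are penalized ...
        inter = torch.clamp(gram, min=0.0)[~same].sum()
        # ... as are negative intra-class similarities.
        intra = torch.clamp(-gram, min=0.0)[same & ~eye].sum()

        return (inter + intra) / labels.numel()

In training, this term would simply be added to the usual classification loss, e.g. loss = F.cross_entropy(logits, labels) + lam * gram_regularization(feats, labels), where lam is a hypothetical weighting hyperparameter to be tuned.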