Abstract: Contrastive learning has significantly advanced data utilization and representation learning, and supervised contrastive learning has demonstrated the benefits of incorporating label information into the learning process. Building on this foundation, we propose Semantic-Enhanced Supervised Contrastive Learning of Representation (SECLR), which leverages not only label information but also conceptual semantics to guide the selection of positive and negative samples. Our approach further diverges from traditional supervised contrastive learning by introducing semantic similarity scores as additional weights in the loss function, allowing it to better distinguish degrees of positive and negative relationships. We validate SECLR on benchmark datasets including ImageNet, VGGSound, and ImageNet-100, and our results show a significant performance boost. Furthermore, we provide a detailed analysis of SECLR under different configurations.
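Since the abstract's central modification is weighting the supervised contrastive loss by semantic similarity scores, the following is a minimal sketch of one plausible formulation, not the authors' implementation; the function name, tensor shapes, and the choice to weight each positive pair in a SupCon-style loss are illustrative assumptions.

```python
# Minimal sketch (assumed formulation, not the paper's code): a SupCon-style
# loss in which each positive pair is weighted by a precomputed semantic
# similarity score, so more semantically related positives contribute more.
import torch

def weighted_supcon_loss(features, labels, sem_sim, temperature=0.07):
    """features: (N, D) L2-normalized embeddings
    labels:   (N,) integer class labels
    sem_sim:  (N, N) semantic similarity scores (assumed given, e.g. in [0, 1])
    """
    n = features.size(0)
    logits = features @ features.t() / temperature            # pairwise similarities / tau
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    logits = logits.masked_fill(self_mask, -1e9)               # exclude self-comparisons
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Positives share a label; weight each positive pair by its semantic similarity.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask = pos_mask.masked_fill(self_mask, 0.0)
    weights = pos_mask * sem_sim

    denom = weights.sum(dim=1).clamp(min=1e-8)
    loss = -(weights * log_prob).sum(dim=1) / denom
    return loss.mean()
```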