Keywords: Continual learning, Representation learning
TL;DR: To learn more robust representations in continual learning, we draw motivation from the t-mFV similarity and adopt it in the supervised contrastive loss.
Abstract: Continual learning has been developed using the standard supervised contrastive loss from the perspective of feature learning. Due to data imbalance during training, learning better representations remains challenging. In this work, we propose replacing cosine similarity with a different similarity metric in the supervised contrastive loss in order to learn more robust representations. We validate our method on the image classification dataset Seq-CIFAR-10, and the results outperform recent continual learning baselines.
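The abstract proposes swapping the similarity metric inside the supervised contrastive (SupCon) loss. The t-mFV similarity itself is not specified here, so the sketch below only shows the standard SupCon loss with a pluggable `sim_fn` argument (a hypothetical interface, not the authors' implementation); `cosine_sim` is the usual baseline, and an alternative metric would plug in at the same point.

```python
import numpy as np

def supcon_loss(z, labels, sim_fn, tau=0.1):
    """Supervised contrastive loss with a pluggable similarity function.

    z: (N, d) batch of embeddings; labels: (N,) integer class labels.
    sim_fn: maps the (N, d) embeddings to an (N, N) similarity matrix.
    """
    n = z.shape[0]
    sims = sim_fn(z) / tau                       # (N, N) scaled similarities
    # Exclude each anchor's similarity with itself from the denominator.
    self_mask = np.eye(n, dtype=bool)
    sims_no_self = np.where(self_mask, -np.inf, sims)
    log_denom = np.log(np.exp(sims_no_self).sum(axis=1))  # sum over a != i
    log_prob = sims - log_denom[:, None]
    # Positives: other samples in the batch with the same label as the anchor.
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    has_pos = pos_mask.sum(axis=1) > 0           # skip anchors with no positive
    per_anchor = ((log_prob * pos_mask).sum(axis=1)[has_pos]
                  / pos_mask.sum(axis=1)[has_pos])
    return -per_anchor.mean()

def cosine_sim(z):
    """Baseline similarity: cosine between L2-normalized embeddings."""
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    return zn @ zn.T
```

Replacing `cosine_sim` with another metric (e.g. the t-mFV similarity of this work) changes only the `sim_fn` argument; the contrastive objective is otherwise unchanged.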