Generalizable Cross-Modality Distillation with Contrastive Learning

18 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: contrastive learning, cross-modality distillation, unsupervised learning, generalization bound
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A novel generalizable cross-modality contrastive distillation (CMCD) framework is proposed to utilize both positive and negative relationships in paired data and effectively distill generalizable representations.
Abstract: Cross-modality distillation arises as an important topic for data modalities containing limited knowledge, such as depth maps and high-quality sketches. Such techniques are of great importance, especially for memory- and privacy-restricted scenarios where labeled training data is generally unavailable. To solve the problem, existing label-free methods leverage a small amount of paired unlabeled data to distill knowledge by aligning features or statistics between the source and target modalities. For instance, one typically aims to minimize the L2 distance between the learned features of paired samples in the source (e.g., image) and target (e.g., sketch) modalities. However, these approaches only consider the positive correspondence in paired samples, which is typically limited in quantity, while overlooking the information carried by negative relationships among unpaired samples, which are far more abundant in cross-modality datasets. To exploit these negative relationships, which play a vital role in learning discriminative feature representations, we propose generalizable cross-modality contrastive distillation (CMCD), a novel framework built upon contrastive learning that leverages both positive and negative correspondences to distill more generalizable features. Extensive experimental results show that our algorithm consistently outperforms existing algorithms by 2-3% across diverse modalities (image, sketch, depth map, and audio) and tasks (recognition and segmentation). Our convergence analysis reveals that the distance between the source and target modalities significantly impacts the test error on downstream tasks within the target modality, a finding that is also validated by our empirical results.
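The abstract does not spell out the training objective, so the following is only a minimal sketch of the contrast it describes: a positive-only L2 alignment baseline versus an InfoNCE-style contrastive distillation loss in which paired samples act as positives and all other samples in the batch serve as negatives. The function names and the temperature value are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def l2_alignment_loss(src_feats, tgt_feats):
    """Baseline: align paired features only (positives), as in prior label-free methods."""
    return F.mse_loss(tgt_feats, src_feats)

def cmcd_contrastive_loss(src_feats, tgt_feats, temperature=0.07):
    """Cross-modality contrastive distillation sketch (InfoNCE-style).

    src_feats: (B, D) features from the source-modality (teacher) encoder
    tgt_feats: (B, D) features from the target-modality (student) encoder
    Row i of src_feats and tgt_feats are a paired sample (positive);
    every other row in the batch is treated as a negative.
    """
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = tgt @ src.t() / temperature                      # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```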
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1422