Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Interpretability, Interactivity, and Analysis of Models for NLP
Submission Track 2: Semantics: Lexical, Sentence level, Document Level, Textual Inference, etc.
Keywords: sentence embedding, contrastive learning, information gain, integrated gradients
TL;DR: Using model analyses with Integrated Gradients and SHAP, we show that contrastive learning-based sentence encoders implicitly weight informative words, where informativeness is measured by two information-theoretic quantities: information gain and self-information.
Abstract: The performance of sentence encoders can be significantly improved through the simple practice of fine-tuning with a contrastive loss. A natural question arises: what characteristics do models acquire during contrastive learning? This paper theoretically and experimentally shows that contrastive learning-based sentence encoders implicitly weight words based on information-theoretic quantities; that is, more informative words receive greater weight, while others receive less. The theory states that, in the lower bound of the optimal value of the contrastive learning objective, the norm of a word embedding reflects the information gain associated with the distribution of its surrounding words. We also conduct comprehensive experiments using various models, multiple datasets, two methods for measuring the implicit weighting of models (Integrated Gradients and SHAP), and two information-theoretic quantities (information gain and self-information). The results provide empirical evidence that contrastive fine-tuning emphasizes informative words.
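
As a rough illustration of the two quantities named above, the minimal Python sketch below estimates them from raw counts on a toy corpus. It assumes self-information is taken as -log p(w) under a unigram distribution and information gain as the KL divergence between the distribution of words co-occurring with w and the marginal unigram distribution; the paper's exact estimators, corpora, and preprocessing may differ, and the toy sentences are purely illustrative.

# Minimal sketch (not the authors' code): estimating the two
# information-theoretic quantities named in the abstract from raw counts.
# Assumptions: self-information = -log p(w) under the unigram distribution;
# information gain = KL(p(. | w) || p(.)) between the distribution of words
# co-occurring with w (same sentence) and the marginal unigram distribution.
import math
from collections import Counter

# Toy corpus, purely for illustration.
corpus = [
    "contrastive learning improves sentence encoders".split(),
    "sentence encoders weight informative words".split(),
    "informative words carry more information".split(),
]

unigram = Counter(w for sent in corpus for w in sent)
total = sum(unigram.values())

def self_information(word: str) -> float:
    """-log p(word) under the corpus unigram distribution (in nats)."""
    return -math.log(unigram[word] / total)

def information_gain(word: str) -> float:
    """KL divergence between the distribution of words co-occurring with
    `word` (same sentence) and the marginal unigram distribution."""
    cooc = Counter(
        w for sent in corpus if word in sent for w in sent if w != word
    )
    n = sum(cooc.values())
    return sum(
        (c / n) * math.log((c / n) / (unigram[w] / total))
        for w, c in cooc.items()
    )

for w in ["informative", "sentence", "more"]:
    print(f"{w}: self-info={self_information(w):.3f}, info-gain={information_gain(w):.3f}")

In the paper's setting, such corpus-level quantities are compared against per-word attribution scores of fine-tuned encoders (e.g., from Integrated Gradients or SHAP) to test whether more informative words receive larger implicit weights.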
Submission Number: 698