Abstract: We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out tokens of the original sentence and then sampling replacements from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning, which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
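The edit step described in the abstract (stochastic masking followed by sampling from a masked language model) can be sketched as follows. This is a toy illustration, not the paper's implementation: `mlm_sample` is a hypothetical stand-in for a real MLM such as BERT, and the masking probability is an assumed parameter.

```python
import random

def make_edited_sentence(tokens, mlm_sample, mask_prob=0.15, seed=0):
    """Toy sketch of the DiffCSE edit step.

    1. Stochastically replace each token with [MASK] (probability `mask_prob`,
       an assumed value here).
    2. Fill each [MASK] by sampling a replacement token; `mlm_sample` is a
       hypothetical callable (masked_tokens, position) -> token standing in
       for a real masked language model.
    Returns both the masked and the edited token sequences.
    """
    rng = random.Random(seed)
    masked = [t if rng.random() > mask_prob else "[MASK]" for t in tokens]
    edited = [
        mlm_sample(masked, i) if t == "[MASK]" else t
        for i, t in enumerate(masked)
    ]
    return masked, edited

# Usage with a trivial stand-in "MLM" that samples from a toy vocabulary.
def toy_mlm_sample(masked_tokens, position):
    return random.Random(position).choice(["cat", "dog", "bird"])

masked, edited = make_edited_sentence(
    "the quick brown fox jumps over the lazy dog".split(),
    toy_mlm_sample,
    mask_prob=0.3,
)
```

DiffCSE then trains the encoder so that the sentence embedding stays invariant to benign augmentations but changes when the input is replaced by such an MLM-edited sentence, i.e. it must be able to detect which tokens were edited.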
Paper Link: https://openreview.net/forum?id=inHtN6WEgYP
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Copyright Consent Signature (type Name Or NA If Not Transferrable): Yung-Sung Chuang
Copyright Consent Name And Address: Massachusetts Institute of Technology. 77 Massachusetts Ave, Cambridge, MA 02139, USA
Presentation Mode: This paper will be presented in person in Seattle