Permutation Invariant Training for Paraphrase Identification

Published: 01 Jan 2023, Last Modified: 19 Feb 2025 · ICASSP 2023 · CC BY-SA 4.0
Abstract: Identifying sentences that share similar meanings is crucial to speech and text understanding. Although currently popular cross-encoder solutions with pre-trained language models as the backbone achieve remarkable performance, they lack permutation invariance, or symmetry, which is one of the most important inductive biases for this task. To alleviate this issue, we propose a permutation invariant training framework that introduces a symmetry regularization during training, forcing the model to produce the same predictions for input sentence pairs presented in both forward and backward order. Empirical studies show improved performance over competitive baselines.
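
The abstract does not spell out the exact form of the regularizer, but the idea translates directly into a consistency loss between the two input orderings. Below is a minimal sketch in PyTorch, assuming a BERT-style cross-encoder and a symmetric-KL agreement term; the backbone name (bert-base-uncased), the helper symmetry_regularized_loss, and the weight lam are illustrative choices, not details from the paper.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative backbone; the paper does not specify this exact model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def symmetry_regularized_loss(sent_a, sent_b, labels, lam=1.0):
    """Cross-entropy on the forward order (a, b) plus a penalty that
    forces predictions on (a, b) and (b, a) to agree."""
    # Encode the pair in both directions with the same cross-encoder.
    fwd = tokenizer(sent_a, sent_b, padding=True, truncation=True,
                    return_tensors="pt")
    bwd = tokenizer(sent_b, sent_a, padding=True, truncation=True,
                    return_tensors="pt")
    logits_fwd = model(**fwd).logits
    logits_bwd = model(**bwd).logits

    # Standard supervised loss on the forward direction.
    ce = F.cross_entropy(logits_fwd, labels)

    # Symmetry regularizer: symmetric KL between the two predictive
    # distributions (one plausible choice of distance, assumed here).
    log_p_fwd = F.log_softmax(logits_fwd, dim=-1)
    log_p_bwd = F.log_softmax(logits_bwd, dim=-1)
    sym = 0.5 * (
        F.kl_div(log_p_fwd, log_p_bwd, log_target=True, reduction="batchmean")
        + F.kl_div(log_p_bwd, log_p_fwd, log_target=True, reduction="batchmean")
    )
    return ce + lam * sym

# Example usage on a toy batch:
loss = symmetry_regularized_loss(
    ["How do I reset my password?"],
    ["What is the way to reset a password?"],
    torch.tensor([1]),
)
loss.backward()
```

At lam = 0 this reduces to ordinary cross-encoder fine-tuning, so the regularizer can be viewed as a soft way of injecting the symmetry inductive bias without changing the model architecture.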