Annotating Training Data for Conditional Semantic Textual Similarity Measurement using Large Language Models

ACL ARR 2025 May Submission 1426 Authors

17 May 2025 (modified: 03 Jul 2025), ACL ARR 2025 May Submission, CC BY 4.0
Abstract: Semantic similarity between two sentences depends on the aspects under which those sentences are compared. To study this phenomenon, Deshpande et al. (2023) proposed the Conditional Semantic Textual Similarity (C-STS) task and annotated a human-rated similarity dataset containing pairs of sentences compared under two different conditions. However, Tu et al. (2024) found various annotation issues in this dataset and showed that manually re-annotating a small portion of it leads to more accurate C-STS models. Despite these pioneering efforts, the lack of large and accurately annotated C-STS datasets remains a blocker to progress on this task, as evidenced by the subpar performance of existing C-STS models. To address this training data need, we use Large Language Models (LLMs) to correct the condition statements and similarity ratings in the original dataset proposed by Deshpande et al. (2023). Our proposed method re-annotates a large training dataset for the C-STS task with minimal manual effort. Importantly, by training a supervised C-STS model on our re-annotated training data, we establish a new state-of-the-art (SoTA) for C-STS, thereby validating the accuracy of our dataset. The re-annotated dataset has been submitted anonymously to ARR and will be publicly released upon paper acceptance to expedite the progress of C-STS research.
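As a rough illustration of the LLM-based re-annotation described in the abstract, the sketch below shows how a sentence pair, its condition statement, and the original rating might be passed to an LLM that returns a corrected condition and rating. This is a minimal sketch under stated assumptions: the `call_llm` helper, the prompt wording, the `condition=...; rating=N` reply format, and the 1-5 rating scale are all hypothetical and are not the authors' actual prompt or pipeline.

```python
# Hypothetical sketch of LLM-based C-STS re-annotation. `call_llm`, the prompt,
# and the reply format are illustrative assumptions, not the paper's method.
from dataclasses import dataclass


@dataclass
class CSTSExample:
    sentence1: str
    sentence2: str
    condition: str
    original_rating: int  # human rating from the original dataset


def build_reannotation_prompt(ex: CSTSExample) -> str:
    # Ask the LLM to rewrite the condition and re-rate the pair under it.
    return (
        "Given two sentences and a condition, first rewrite the condition as a "
        "clear, well-formed statement, then rate the similarity of the sentences "
        "with respect to that condition on a 1-5 scale.\n"
        f"Sentence 1: {ex.sentence1}\n"
        f"Sentence 2: {ex.sentence2}\n"
        f"Condition: {ex.condition}\n"
        f"Original rating: {ex.original_rating}\n"
        "Answer as: condition=<revised condition>; rating=<1-5>"
    )


def call_llm(prompt: str) -> str:
    # Placeholder LLM call: replace with a real chat-completion client.
    # Returns a canned reply here so the sketch runs end to end.
    return "condition=the number of people involved; rating=3"


def reannotate(ex: CSTSExample) -> tuple[str, int]:
    # Parse the assumed "condition=...; rating=N" reply format.
    reply = call_llm(build_reannotation_prompt(ex))
    cond_part, rating_part = reply.rsplit(";", 1)
    condition = cond_part.split("=", 1)[1].strip()
    rating = int(rating_part.split("=", 1)[1].strip())
    return condition, rating


if __name__ == "__main__":
    example = CSTSExample(
        sentence1="Two children play soccer in the park.",
        sentence2="A group of kids kicks a ball on the grass.",
        condition="number of people",
        original_rating=4,
    )
    print(reannotate(example))
```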
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: semantic textual similarity, conditional similarity, sentence embeddings, large language models, data annotation
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 1426