SimSCR: A Simple Supervised Contrastive Learning Framework for Response Selection of Dialogue Systems

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
Abstract: Supervised contrastive learning has shown impressive performance across multiple NLP tasks, improving model generalization by pulling the semantic representations of samples from the same category closer together and pushing those of different categories further apart. For response selection, however, directly computing the similarity between a context and a response can lead to suboptimal performance because of insufficient attention-based interaction compared with traditional full-attention methods. To address this issue, we propose an interactive supervised contrastive learning framework that reformulates response selection from a classification problem into a matching problem by introducing a special anchor response during training, making contrastive learning directly applicable to this task. The framework combines the deep context–response interaction of traditional methods with the strong generalization ability of contrastive learning. In addition, we introduce a heuristic hard negative response sampling method that substantially reduces the number of negative samples required for contrastive learning. Our method achieves state-of-the-art results on three publicly available response selection datasets.
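The abstract describes the approach only at a high level. As a rough illustration of the contrastive objective it refers to, the sketch below implements a standard supervised contrastive (SupCon-style) loss over pooled context–response pair representations, such as the [CLS] vector of a full-attention cross-encoder. The function name `sup_con_loss`, the temperature value, and the batch layout are illustrative assumptions, not the authors' implementation; the anchor-response construction and the hard-negative sampling heuristic mentioned in the abstract are not reproduced here.

```python
# Minimal sketch (not the authors' code) of a supervised contrastive loss
# over pooled context-response representations; names and defaults are
# illustrative assumptions.
import torch
import torch.nn.functional as F


def sup_con_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over a batch of pair representations.

    embeddings: (batch, dim) pooled vectors of context-response pairs,
                e.g. the [CLS] output of a full-attention cross-encoder.
    labels:     (batch,) 1 for a matching (positive) pair, 0 for a negative.
    """
    z = F.normalize(embeddings, dim=-1)          # work in cosine geometry
    sim = z @ z.T / temperature                  # pairwise similarity logits

    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))

    # Positives: other samples in the batch sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-softmax over all other samples for each anchor.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average negative log-probability over positives, for anchors that have any.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```

Under this formulation, heuristically mined hard negatives would simply be included in the batch as label-0 pairs, so fewer random negatives are needed to obtain an informative contrast.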
Paper Type: long
Research Area: Dialogue and Interactive Systems
Languages Studied: English, Chinese