Improving Retrieval-Based Dialogue Systems: Fine-Grained Post-Training Prompt Adaptation and a Pairwise Optimization Fine-Tuning Strategy
Abstract: Pre-trained models have demonstrated robust performance on natural language processing tasks. In retrieval-based dialogue systems, most existing studies reduce the multi-turn dialogue response selection problem to a classification problem. While such approaches have proven effective, they do not fully exploit the rich contextual understanding of pre-trained models and cannot effectively handle the complex contexts and semantic relations in multi-turn dialogues, which can lead to information loss and performance bottlenecks. This paper proposes a fine-grained post-training prompt adaptation method combined with a pairwise optimization fine-tuning strategy (FPPP). During training, the fine-grained post-training prompt adaptation method strengthens the model's contextual understanding and logical reasoning ability. In the prompt-tuning phase, the pairwise optimization fine-tuning strategy improves the model's ability to discriminate between positive and negative samples. On all three datasets, FPPP outperforms the baseline model, improving the R10@1 metric by 0.1%, 1.4%, and 3.6%, respectively. The experimental results not only confirm the effectiveness of our method but also offer a new approach for retrieval-based dialogue systems.
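The abstract does not specify the form of the pairwise objective, but a standard way to realize "discriminating between positive and negative samples" in response selection is a margin-based ranking loss: the model is trained to score the ground-truth response higher than a sampled negative for the same context. The sketch below is a minimal illustration of that idea, not the paper's actual implementation; names such as score_model and the margin value are hypothetical.

import torch
import torch.nn.functional as F

def pairwise_loss(pos_scores: torch.Tensor,
                  neg_scores: torch.Tensor,
                  margin: float = 0.5) -> torch.Tensor:
    """Margin ranking loss for response selection (illustrative sketch).

    pos_scores / neg_scores: shape (batch,), matching scores assigned by a
    cross-encoder to the ground-truth and a sampled negative response for
    the same dialogue context.
    """
    # Penalize cases where the positive does not outscore the negative
    # by at least `margin`: max(0, margin - (s_pos - s_neg)), batch-averaged.
    return F.relu(margin - (pos_scores - neg_scores)).mean()

# Hypothetical usage with a cross-encoder that scores (context, response) pairs:
# pos = score_model(contexts, true_responses)      # (batch,)
# neg = score_model(contexts, negative_responses)  # (batch,)
# loss = pairwise_loss(pos, neg)
# loss.backward()

Compared with reducing selection to independent binary classification, such a pairwise objective directly optimizes the relative ordering of candidates, which is what the retrieval metric R10@1 measures.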