Context-Aware Ranking Approaches for Search-based Query Rewriting

Anonymous

04 Mar 2022 (modified: 05 May 2023) · Submitted to NLP for ConvAI
Keywords: learning-to-rank, query rewriting
Abstract: Query rewriting (QR) is an increasingly important technique for reducing user friction in large-scale conversational AI agents. Recently, the search based query rewriting system has been proven effective and achieved promising results. It is a multi-stage system that consists of two components orderly: retrieval and ranking. Specifically, given a query, a dual-encoder model retrieves top N rewrite candidates. Then a Gradient Boosted Decision Trees (GBDT) re-ranks the candidates by considering semantic and information retrieval (IR) features. However, although there is still a debate for the effectiveness of the neural ranking model on traditional Learning-to-Rank (LTR) problems, the neural LTR models for the QR task have not been explored. To this end, we first explore preliminary ranking models, including both tree-based (e.g., LambdaMART) and neural-based (e.g., point-wise, list-wise) ranking models. Furthermore, we propose a context-aware ranking approach by integrating the dialog context information into the ranking models. Experimental results demonstrate that the proposed context-aware ranking model outperforms the baselines significantly.