Latent Graph Recurrent Network for Document Ranking

Published: 01 Jan 2021 · Last Modified: 08 Oct 2024 · DASFAA (2) 2021 · CC BY-SA 4.0
Abstract: BERT-based ranking models are emerging because of BERT's superior natural language understanding ability. The attention matrix learned through BERT captures all pairwise word relations in the input text, whereas neural ranking models should focus on the matching between query and document. To address this mismatch, we propose a graph recurrent neural network based model that refines BERT word representations for document ranking, referred to as the Latent Graph Recurrent Network (LGRe for short). For each query-document pair, word representations are first learned through transformer layers. Based on these representations, we propose masking strategies to construct a bipartite-core word graph that models the matching between the query and the document. The word representations are then refined by a graph recurrent neural network to strengthen the word relations encoded in this graph, and the final relevance score is computed from the refined representations through fully connected layers. Moreover, we propose a triangle distance loss on the embedding layers as an auxiliary task to obtain discriminative representations; it is optimized jointly with a pairwise ranking loss for the ad hoc document ranking task. Experimental results on the public benchmark TREC Robust04 and WebTrack 2009-12 test collections show that LGRe (the implementation is available at https://github.com/DQ0408/LGRe) outperforms state-of-the-art baselines by more than \(2\%\).
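To make the two core ideas above concrete, here is a minimal PyTorch sketch, not the authors' implementation (see the GitHub link for that): a masking strategy that keeps only edges connecting a query token to a document token, yielding a bipartite word graph, and a gated graph-recurrent step that refines token representations over that graph. All names (`build_bipartite_mask`, `GRNRefiner`) are illustrative assumptions.

```python
# Illustrative sketch of bipartite-core masking and graph-recurrent
# refinement; names and details are assumptions, not the paper's code.
import torch
import torch.nn as nn

def build_bipartite_mask(is_query: torch.Tensor) -> torch.Tensor:
    """is_query: (L,) bool, True for query tokens, False for document tokens.
    Returns an (L, L) bool adjacency keeping only query-document edges,
    i.e. a bipartite word graph modeling query-document matching."""
    q = is_query.unsqueeze(0)           # (1, L)
    d = (~is_query).unsqueeze(0)        # (1, L)
    return (q.t() & d) | (d.t() & q)    # edge iff endpoints lie on opposite sides

class GRNRefiner(nn.Module):
    """One gated recurrent refinement step over the word graph: each token
    aggregates its neighbors' representations, then updates its own state
    with a GRU cell (a common graph-recurrent formulation)."""
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (L, dim) token representations from the transformer layers
        # adj: (L, L) bool adjacency from build_bipartite_mask
        a = adj.float()
        deg = a.sum(-1, keepdim=True).clamp(min=1.0)
        msg = (a @ h) / deg             # mean of neighbor representations
        return self.cell(msg, h)        # gated update of each token's state

# Usage: two query tokens followed by three document tokens.
is_query = torch.tensor([True, True, False, False, False])
h = torch.randn(5, 64)
refined = GRNRefiner(64)(h, build_bipartite_mask(is_query))
```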
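The joint objective can likewise be sketched. The abstract does not give the exact form of the triangle distance loss, so a standard triplet margin over (query, relevant document, irrelevant document) embeddings stands in for it below; the weighting `alpha` is also an assumption.

```python
# Sketch of the joint objective: pairwise hinge ranking loss over relevance
# scores plus an auxiliary embedding loss. The triplet margin term is a
# stand-in for the paper's triangle distance loss, not its actual form.
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(score_pos, score_neg, margin=1.0):
    # The relevant document should outscore the irrelevant one by a margin.
    return F.relu(margin - (score_pos - score_neg)).mean()

def auxiliary_embedding_loss(q_emb, pos_emb, neg_emb, margin=1.0):
    # Assumed stand-in: pull the query embedding toward the relevant
    # document and push it away from the irrelevant one.
    return F.triplet_margin_loss(q_emb, pos_emb, neg_emb, margin=margin)

def joint_loss(score_pos, score_neg, q_emb, pos_emb, neg_emb, alpha=0.1):
    # alpha weights the auxiliary task against the ranking loss (assumed).
    return pairwise_ranking_loss(score_pos, score_neg) + \
           alpha * auxiliary_embedding_loss(q_emb, pos_emb, neg_emb)
```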