Learning Contrastive Representations for Dense Passage Retrieval in Open-Domain Conversational Question Answering

Published: 01 Jan 2024 · Last Modified: 21 May 2025 · WISE (1) 2024 · CC BY-SA 4.0
Abstract: Recent research on the task of Conversational Question Answering (ConvQA) emphasizes the role of open retrieval in a multi-turn interaction setting built on a retriever-reader pipeline, wherein the former selects relevant passages from a large collection and the latter resolves contextual dependencies to understand the question and predict the accurate answer. This open-domain ConvQA (OD-ConvQA) setting relies heavily on correct passage retrieval; otherwise, errors propagated from the retriever module make the reader vulnerable and degrade the model's performance. Existing retriever-reader approaches in OD-ConvQA utilize the entire conversational context to retrieve passages. This retrieval, however, results in the selection of irrelevant passages, which in turn reduces the model's overall performance. To address this limitation, this work proposes an approach, called Dense Passage Retrieval in Conversational Question Answering (DPR-ConvQA), that utilizes carefully curated history turns to improve dense passage retrieval, helping the selection of more accurate answers. Our approach addresses two key challenges. First, it filters irrelevant context from the input, which limits the retrieval of entirely unrelated passages from the huge collection. Second, the model utilizes dense passage retrieval based on contrastive representation learning, which minimizes the distance between positive samples and maximizes the distance between negative ones, yielding better passage representations. We validate the proposed model on two popular OD-ConvQA datasets, OR-QuAC and TopiOCQA. Experimental results show that our method outperforms traditional baseline methods and complements the retriever-reader setup.
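The abstract does not spell out the exact training objective; the following is a minimal sketch of the kind of contrastive loss it describes for dense retrieval, using in-batch negatives and cosine similarity. The function name, temperature value, and tensor shapes are illustrative assumptions, not the authors' stated implementation.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(question_embs, passage_embs, temperature=0.05):
    """Illustrative in-batch contrastive loss for dense passage retrieval.

    question_embs: (B, d) encoded questions (e.g., with filtered history turns).
    passage_embs:  (B, d) encoded gold passages; passage i is the positive for
                   question i, and the other passages in the batch serve as negatives.
    """
    # Cosine similarity between every question and every passage in the batch.
    q = F.normalize(question_embs, dim=-1)
    p = F.normalize(passage_embs, dim=-1)
    scores = q @ p.t() / temperature  # (B, B) similarity matrix

    # The diagonal holds the positive pairs; cross-entropy pulls them closer
    # while pushing the off-diagonal (negative) pairs apart.
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)
```

In this sketch, minimizing the cross-entropy over the similarity matrix simultaneously reduces the distance between each question and its positive passage and increases the distance to the in-batch negatives, which is the behavior the abstract attributes to the contrastive representation learning component.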