Recurrent Chunking Mechanisms for Conversational Machine Reading Comprehension

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: Recurrent Chunking Policy, Machine Reading Comprehension, Reinforcement Learning
Abstract: In this paper, we focus on the conversational machine reading comprehension (MRC) problem, where the input to a model can be a lengthy document and a series of interconnected questions. To deal with long inputs, previous approaches usually chunk them into equally spaced segments and predict answers based on each chunk independently, without considering information from other chunks. As a result, they may form chunks that fail to cover the complete answer or lack sufficient context around the correct answer for question answering. Moreover, they are less capable of answering questions that require cross-chunk information. We propose to let a model learn to chunk in a more flexible way via reinforcement learning: the model decides the next chunk that it wants to process, in either reading direction. We also apply recurrent mechanisms to allow information to be transferred between chunks. Experiments on two conversational MRC tasks -- CoQA and QuAC -- demonstrate the effectiveness of our recurrent chunking mechanisms: we obtain chunks that are more likely to contain complete answers and at the same time provide sufficient context around the ground-truth answers for better predictions. Specifically, our proposed mechanisms yield up to a 7.5% improvement in F1 over the baseline on extremely long texts.
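The abstract sketches two components: a chunking policy, trained with reinforcement learning, that selects the next chunk to process in either reading direction, and a recurrence that carries information between chunks. Below is a minimal sketch of how such a loop could be wired up. The stride values, hidden size, and the encode_chunk and answer_chunk helpers are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Candidate window movements in tokens; negative strides move the chunk
# backward. These values are illustrative, not taken from the paper.
STRIDES = [-64, 64, 128, 256]
HIDDEN = 256

class ChunkPolicy(nn.Module):
    """Scores candidate strides from a recurrent summary of the chunks read so far."""
    def __init__(self, hidden=HIDDEN):
        super().__init__()
        self.rnn = nn.GRUCell(hidden, hidden)   # carries information across chunks
        self.scorer = nn.Linear(hidden, len(STRIDES))

    def forward(self, chunk_repr, state):
        state = self.rnn(chunk_repr, state)     # recurrent chunk-to-chunk transfer
        return Categorical(logits=self.scorer(state)), state

def read_document(doc_len, encode_chunk, answer_chunk, policy,
                  chunk_size=512, max_chunks=8):
    """Slide a chunk window over the document under the learned policy.

    encode_chunk(start, end) -> (1, HIDDEN) chunk representation (e.g., a
    BERT [CLS] vector); answer_chunk(repr) -> per-chunk answer prediction.
    Both are assumed helpers standing in for the reader model.
    """
    start, state = 0, torch.zeros(1, HIDDEN)
    answers, log_probs = [], []
    for _ in range(max_chunks):
        end = min(start + chunk_size, doc_len)
        chunk_repr = encode_chunk(start, end)
        answers.append(answer_chunk(chunk_repr))      # per-chunk prediction
        dist, state = policy(chunk_repr, state)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))       # kept for a REINFORCE update
        if end >= doc_len and STRIDES[action.item()] > 0:
            break  # reached the end of the document while still moving forward
        start = max(0, min(start + STRIDES[action.item()],
                           max(0, doc_len - chunk_size)))
    return answers, log_probs
```

At training time, the per-chunk answer scores would supply the reward signal and the stored log-probabilities the policy-gradient term; at inference, the model simply follows the sampled (or greedy) strides.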