Paper Link: https://openreview.net/forum?id=g68eYTS0rzJ
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA work mostly focuses on specific question types, knowledge domains, or reasoning skills. This specialization hinders systems from modeling commonalities across tasks and from generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks with a single model. ProQA uses a unified structural prompt as the bridge between tasks and improves QA-centric ability through structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models the knowledge shared across all QA tasks while preserving the knowledge customized for each specific QA task. Furthermore, ProQA is pre-trained on a large-scale synthesized corpus formatted with structural prompts, which equips the model with commonly required QA abilities. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. ProQA also exhibits strong continual learning and transfer learning ability by taking advantage of the structural prompt.
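To make the idea of a "structurally designed prompt-based input schema" concrete, below is a minimal, hypothetical sketch of how such a structural prompt might be assembled for a QA example. The field names ([Format], [Task], [Domain], [Question], [Passage], [Options]) and the serialization layout are illustrative assumptions, not the paper's exact schema.

```python
from typing import List, Optional


def build_structural_prompt(
    answer_format: str,              # e.g. "extractive" or "multiple-choice" (assumed field)
    task: str,                       # dataset/task identifier, e.g. "SQuAD" (assumed field)
    domain: str,                     # knowledge domain, e.g. "wikipedia" (assumed field)
    question: str,
    passage: str,
    options: Optional[List[str]] = None,
) -> str:
    """Serialize task-general and instance-specific fields into one model input string.

    This is an illustrative sketch of a structural prompt; the real ProQA schema
    may use different fields, ordering, or learned (soft) prompt tokens.
    """
    parts = [
        f"[Format] {answer_format}",
        f"[Task] {task}",
        f"[Domain] {domain}",
        f"[Question] {question}",
        f"[Passage] {passage}",
    ]
    if options:  # only multiple-choice style tasks carry candidate options
        parts.append("[Options] " + " | ".join(options))
    return " ".join(parts)


if __name__ == "__main__":
    prompt = build_structural_prompt(
        answer_format="extractive",
        task="SQuAD",
        domain="wikipedia",
        question="Where is Sun Yat-Sen University located?",
        passage="Sun Yat-Sen University is located in Guangzhou, China.",
    )
    print(prompt)
```

Under this kind of schema, the shared fields (format, task, domain) let a single model condition on what is common across QA tasks, while the per-example fields carry the task-specific content.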
Copyright Consent Signature (type Name Or NA If Not Transferrable): Wanjun Zhong
Copyright Consent Name And Address: Sun Yat-Sen University: No. 135, Xingang Xi Road, Guangzhou, 510275, P. R. China
Presentation Mode: This paper will be presented in person in Seattle