CQARE: Contrastive Question-Answering for Few-shot Relation Extraction with Prompt Tuning

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Prompt tuning with pre-trained language models (PLMs) has exhibited outstanding performance by closing the gap between pre-training tasks and various downstream applications, without introducing uninitialized parameters. However, prompt tuning requires extensive prompt engineering and predefined label word mappings, which hinders its use in practice. Moreover, the large label space makes prompt tuning particularly arduous for relation extraction (RE). To tackle these issues, we propose a Contrastive Question-Answering method with prompt tuning for few-shot RE (CQARE). CQARE carries out RE task-specific pre-training with four entity- and relation-aware pre-training objectives, including a prompt pre-training objective that automatically generates continuous prompts. The proposed pre-training provides more robust initialization for prompt tuning while maintaining semantic consistency with the proposed PLM. Furthermore, CQARE avoids label word mapping entirely by reformulating RE as contrastive question answering. The results show that CQARE raises average accuracy by 5.11\% on a cross-domain few-shot dataset, demonstrating that robust initialization is crucial for prompt tuning and that contrastive question answering is effective.
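
To make the reformulation concrete, the sketch below shows one plausible way to cast RE as question answering over candidate relations: each relation is verbalized as a question about the entity pair and scored against the sentence, so no label word mapping is required. The question template, the candidate relation set, the off-the-shelf BERT encoder, and the cosine-similarity scoring are illustrative assumptions, not CQARE's pre-trained model, continuous prompts, or contrastive objectives.

```python
# Illustrative sketch only: casts relation extraction as question answering
# over candidate relations and picks the best-scoring one. The template,
# relation set, and cosine-similarity scoring are assumptions for
# illustration, not CQARE's actual pre-trained model or objectives.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Return the [CLS] embedding of a piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]  # [CLS] token

def rank_relations(sentence: str, head: str, tail: str, relations: list[str]) -> str:
    """Verbalize each candidate relation as a question, score it against
    the sentence, and return the best-matching relation."""
    context = embed(sentence)
    scores = {}
    for rel in relations:
        # Hypothetical question template standing in for a learned prompt.
        question = f"Is {head} the {rel} of {tail}?"
        scores[rel] = torch.cosine_similarity(context, embed(question)).item()
    return max(scores, key=scores.get)

print(rank_relations(
    "Marie Curie was born in Warsaw.",
    "Marie Curie", "Warsaw",
    ["birthplace", "employer", "spouse"],
))
```

In this framing the classifier never maps relations to single label words; scoring verbalized questions against the context is what the contrastive question-answering view replaces label word mapping with.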
