CLoCE: Contrastive Learning Optimize Continuous Prompt Embedding Space in Relation Extraction

Anonymous

16 Jan 2022 (modified: 19 Aug 2024), ACL ARR 2022 January Blind Submission, Readers: Everyone
Abstract: Recent studies have shown that prompt tuning can improve the performance of pre-trained language models (PLMs) on downstream tasks. However, in relation extraction (RE), a large number of confusing samples still prevent prompt-tuning methods from achieving higher accuracy. Inspired by previous work, we apply contrastive learning to this problem: we propose a prompt-tuning-based framework that uses a contrastive objective to optimize the representations of input sentences in the embedding space. We also design a more general template for the RE task and further use knowledge injection to improve the model's performance. In extensive experiments on public datasets, the micro F1-score of our model exceeds the existing SOTA on the Re-TACRED and TACREV datasets by 0.5 and 1.0 points, respectively. In the few-shot setting, our model is also more robust than fine-tuning methods.
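For readers unfamiliar with the mechanism the abstract refers to, the sketch below shows one common way to apply a supervised contrastive objective to sentence representations grouped by relation label. It is a minimal PyTorch illustration under our own assumptions: the function name, the temperature of 0.1, and the use of the PLM's [MASK]-position hidden states as "embeddings" are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over relation labels.

    embeddings: (batch, dim) sentence representations, e.g. the PLM's
                [MASK]-position hidden states (an assumption for this sketch).
    labels:     (batch,) integer relation labels.
    """
    z = F.normalize(embeddings, dim=-1)                   # work in cosine-similarity space
    sim = z @ z.t() / temperature                         # (batch, batch) pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # exclude self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-probability of the same-relation (positive) pairs per anchor.
    loss_per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    has_pos = pos_mask.any(dim=1)                         # keep only anchors with positives
    return loss_per_anchor[has_pos].mean()


# Toy usage: 8 random "sentence embeddings" with 3 relation classes.
emb = torch.randn(8, 768)
rel = torch.randint(0, 3, (8,))
print(supervised_contrastive_loss(emb, rel))
```

Minimizing this loss pulls same-relation sentences together and pushes different-relation sentences apart in the embedding space, which is the general effect the abstract attributes to its contrastive component; it would typically be combined with the prompt-tuning (masked-token prediction) loss.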
Paper Type: long
