A Cueing Strategy for Prompt Tuning in Relation Extraction

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Traditional relation extraction models predict confidence scores for each relation type from a condensed sentence representation. In prompt tuning, prompt templates are used to tune pretrained language models (PLMs), which output relation types as verbalized type tokens. This strategy shows great potential for relation extraction because it makes full use of the rich knowledge encoded in PLMs. However, current prompt tuning models operate directly on the raw input, which is weak at encoding the contextual features and semantic dependencies of a relation instance. In this paper, we designed a cueing strategy that implants task-specific cues into the input. It steers the attention of prompt tuning, enabling PLMs to learn task-specific contextual features and semantic dependencies of a relation instance. We evaluated our method on two public datasets. Experiments show substantial improvement: our method exceeds state-of-the-art performance by more than 4.8% and 1.4% in F1-score on the SemEval and ReTACRED corpora, respectively.
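
To make the prompt-tuning setup concrete, below is a minimal sketch of prompt-based relation prediction with a masked language model. The abstract does not specify the paper's template, verbalizer, or cue tokens, so the model name, template wording, and relation-to-token mapping here are all illustrative assumptions; only the inference-time scoring step is shown, not the tuning loop.

```python
# Minimal sketch of prompt-based relation prediction with a masked LM.
# The template, verbalizer, and cue placement are hypothetical; the
# paper's actual cueing strategy is not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumption: any masked-LM PLM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Hypothetical verbalizer: map each relation type to one vocabulary token.
verbalizer = {"cause-effect": "caused", "component-whole": "contains"}

sentence = "The burst was caused by the pressure."
head, tail = "burst", "pressure"

# The template wraps the raw input; a cueing strategy would additionally
# implant task-specific cue tokens around the entities at this point.
prompt = f"{sentence} The {head} [MASK] the {tail}."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Score each relation's verbalizer token at the [MASK] position and
# predict the highest-scoring relation type.
mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
scores = {
    rel: logits[0, mask_idx, tokenizer.convert_tokens_to_ids(tok)].item()
    for rel, tok in verbalizer.items()
}
print(max(scores, key=scores.get))
```

In actual prompt tuning, these verbalizer-token logits would feed a cross-entropy loss against the gold relation label, and the PLM (and any soft prompt parameters) would be updated by backpropagation.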