A Cueing Strategy with Prompt Tuning for Relation Extraction

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Prompt tuning shows great potential for relation extraction because it makes full use of the rich knowledge in pretrained language models (PLMs). However, current prompt tuning models operate directly on the raw input, which is weak at encoding the semantic dependencies of a relation instance. In this paper, we design a cueing strategy that implants task-specific cues into the input, enabling PLMs to learn task-specific contextual features and semantic dependencies within a relation instance. Experiments on the ReTACRED and ACE 2005 corpora show state-of-the-art performance in terms of F1 score.
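To make the idea concrete, below is a minimal sketch of what "implanting cues into the input" for prompt-tuned relation extraction could look like. It assumes entity-marker cue tokens and a cloze-style [MASK] template over a BERT-style masked LM; the cue tokens ([SUBJ], [OBJ], etc.), the template wording, and the backbone model are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch only: cue tokens, template, and model are assumptions,
# not the paper's actual method.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # assumed PLM backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Hypothetical task-specific cue tokens implanted around the two entity spans.
CUES = ["[SUBJ]", "[/SUBJ]", "[OBJ]", "[/OBJ]"]
tokenizer.add_special_tokens({"additional_special_tokens": CUES})
model.resize_token_embeddings(len(tokenizer))

def build_cued_prompt(sentence: str, subj: str, obj: str) -> str:
    """Implant cue tokens around the entity mentions, then append a
    cloze-style prompt whose [MASK] position predicts the relation."""
    cued = sentence.replace(subj, f"[SUBJ] {subj} [/SUBJ]")
    cued = cued.replace(obj, f"[OBJ] {obj} [/OBJ]")
    return f"{cued} The relation between {subj} and {obj} is {tokenizer.mask_token}."

text = build_cued_prompt("Steve Jobs founded Apple in 1976.", "Steve Jobs", "Apple")
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Read off the PLM's distribution at the [MASK] position; in actual prompt
# tuning, these logits would be mapped to relation labels via a verbalizer
# and the prompt parameters trained against the relation classification loss.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_pos].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids))
```

The design intuition, as the abstract frames it: the cue tokens give the PLM explicit anchors for the entity spans, so the contextual features it learns during prompt tuning are tied to the relation instance rather than to an undifferentiated raw input.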
Paper Type: short