Abstract: Joint Relational Triple Extraction (RTE) is an important task in information extraction. With the development of pre-trained language models, sequence-to-sequence (seq2seq) approaches have become a promising direction for this task: a predefined template converts relational triples into a structured target sequence that can be easily decoded back into triples. However, most existing seq2seq studies focus on improving the method itself while ignoring the impact of template style on performance. Motivated by this, we first explore the effects of different template styles and find that some styles help generative models achieve better performance. Based on these findings, we argue that different template styles lead to different understandings of relational triples. We therefore propose the Regularized Template Style (R-TES) method, which improves the performance of a main template by reducing the gap between it and other selected templates. Specifically, R-TES uses the pre-trained language model to select templates via Kullback-Leibler (KL) divergence, and then further reduces the gap between the main template and the selected templates by minimizing that KL divergence. Experimental results show that our method outperforms state-of-the-art methods on a publicly available dataset.
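As a minimal sketch of the idea (our notation, not taken from the paper): given an input sentence $x$, a main template $\tau_m$, and a set $\mathcal{T}$ of selected auxiliary templates, the training objective could combine the standard generation loss with a KL regularizer,
$$\mathcal{L} = \mathcal{L}_{\mathrm{gen}}(x, \tau_m) + \lambda \sum_{\tau \in \mathcal{T}} D_{\mathrm{KL}}\big(p_\theta(y \mid x, \tau_m) \,\|\, p_\theta(y \mid x, \tau)\big),$$
where $p_\theta$ is the seq2seq model's output distribution and $\lambda$ is a weighting hyperparameter. Minimizing the KL term pulls the main template's predictions toward those of the selected templates, reducing the gap the abstract describes.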