Low-resource Data-to-Text Generation Using Pretrained Language Models

Anonymous

16 Feb 2022 (modified: 05 May 2023) · ACL ARR 2022 February Blind Submission
Abstract: Expressing natural language descriptions of structured facts or relations -- data-to-text generation -- increases the accessibility of a diverse range of structured knowledge repositories. End-to-end neural models for this task require a large training corpus of relations and corresponding descriptions. Since obtaining such resources is unrealistic for every domain, we need to understand how well data-to-text generation models generalize to new relations. This work presents an analysis of data-to-text models for unseen relations based on two pre-trained language models (PLMs): T5 and GPT-2. We consider different strategies, including few-shot learning, prompt-tuning, and incorporating additional domain knowledge (natural language descriptions of the unseen relations), to identify effective strategies and remaining challenges for improving the performance of PLMs on new relations.
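The abstract describes prompting pretrained language models such as T5 to verbalize relations they have not seen in training. The snippet below is a minimal illustrative sketch, not the authors' setup: it assumes a Hugging Face T5 checkpoint (`t5-small`) and an ad-hoc prompt that linearizes a (subject, relation, object) triple, simply to show what a zero-shot data-to-text query to a PLM can look like.

```python
# Illustrative sketch of zero-shot data-to-text generation with a pretrained T5
# checkpoint via Hugging Face Transformers. The checkpoint name and the prompt
# format are assumptions for demonstration; they are not the paper's method.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"  # assumed checkpoint; the paper studies T5 and GPT-2
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Linearize a structured relation (subject, relation, object) into a text prompt.
triple = ("Alan Turing", "field of work", "computer science")
prompt = f"translate from triple to text: {triple[0]} | {triple[1]} | {triple[2]}"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```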
Paper Type: long