A Dynamic Prompt-tuning Method for Data Augmentation with Associated Knowledge

01 Mar 2023 (modified: 17 Apr 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: data-to-text, data augmentation, prompt tuning
TL;DR: The proposed DPTAK augmentation method generates more diverse training data, helping PLMs achieve higher BLEU scores.
Abstract: Transformer-based pretrained language models (PLMs) have been shown to acquire rich prior knowledge during pretraining. To assist the data-to-text task, we propose a new dynamic prompt tuning method, DPTAK, that retrieves knowledge from a PLM associated with individual data-text pairs. Our method increases the diversity of the training examples without the need to manually collect and label data. When applied to GPT-2, DPTAK outperforms baseline models on several well-studied data-to-text and text-to-data datasets such as E2E, WebNLG, and DART.
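
For illustration, below is a minimal sketch of prompt-based data augmentation with a pretrained GPT-2 via the Hugging Face transformers library. This is a hypothetical example, not the DPTAK implementation; the prompt format, sampling settings, and the augment helper are assumptions made purely for illustration.

# Hypothetical sketch of prompt-based data augmentation with GPT-2
# (illustrative only; not the authors' DPTAK method). It prompts a
# pretrained GPT-2 with a linearized data record plus its reference
# text and samples paraphrases to enlarge the training set.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def augment(record, reference, num_samples=3):
    # The linearized data record and its reference text form the prompt.
    prompt = f"Data: {record}\nText: {reference}\nParaphrase:"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(
        input_ids,
        max_length=input_ids.shape[1] + 40,
        do_sample=True,            # sampling encourages diverse outputs
        top_p=0.9,
        num_return_sequences=num_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated continuation of each sample.
    return [tokenizer.decode(o[input_ids.shape[1]:], skip_special_tokens=True)
            for o in outputs]

# Example: augment one E2E-style record.
print(augment("name[The Eagle], food[French], area[riverside]",
              "The Eagle is a French restaurant by the riverside."))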