N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking

Anonymous

16 Oct 2021 (modified: 05 May 2023) · ACL ARR 2021 October Blind Submission
Abstract: We introduce an augmentation framework that uses belief state annotations to match turns across dialogues and compose new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. In evaluations with the TRADE and TOD-BERT models, our augmentation strategy yields significant improvements both when adapting a DST model to a new domain and when adapting a language model to the DST task. Further analysis shows that the resulting model performs better on values seen during training and is also more robust to unseen values, even though we use no external dataset for augmentation. We conclude that exploiting belief state annotations enhances dialogue augmentation and yields improved models in $n$-shot training scenarios.
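The abstract does not spell out the composition procedure, so the following is a minimal sketch of what bottom-up, belief-state-driven turn matching could look like. All names (`Turn`, `compatible`, `synthesize_dialogue`), the slot/value representation, and the greedy chaining policy are illustrative assumptions, not the paper's actual algorithm.

```python
import random
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Turn:
    """One dialogue turn with its annotated belief state.

    Belief states are assumed here to be flat slot->value maps,
    e.g. {"hotel-area": "north"} (hypothetical representation).
    """
    user_utt: str
    system_utt: str
    belief_state: Dict[str, str]

def compatible(state: Dict[str, str], candidate: Dict[str, str]) -> bool:
    """A candidate turn is compatible if it never contradicts a slot
    value already fixed in the accumulated state."""
    return all(state.get(slot, value) == value
               for slot, value in candidate.items())

def synthesize_dialogue(turn_pool: List[Turn],
                        max_len: int = 6,
                        rng: Optional[random.Random] = None) -> List[Turn]:
    """Greedily chain belief-state-compatible turns drawn from
    different source dialogues into one synthetic dialogue
    (a bottom-up composition; the real matching criterion may differ)."""
    rng = rng or random.Random()
    state: Dict[str, str] = {}
    dialogue: List[Turn] = []
    pool = turn_pool[:]
    rng.shuffle(pool)
    for turn in pool:
        if len(dialogue) == max_len:
            break
        if compatible(state, turn.belief_state):
            dialogue.append(turn)
            state.update(turn.belief_state)
    return dialogue

if __name__ == "__main__":
    # Toy pool of annotated turns (made-up data for illustration only).
    pool = [
        Turn("I need a hotel in the north.", "Sure, any price range?",
             {"hotel-area": "north"}),
        Turn("Something cheap, please.", "Okay, a cheap hotel then.",
             {"hotel-area": "north", "hotel-pricerange": "cheap"}),
        Turn("Actually, the south side.", "Looking in the south.",
             {"hotel-area": "south"}),
    ]
    for t in synthesize_dialogue(pool, rng=random.Random(0)):
        print(t.user_utt, "->", t.belief_state)
```

The compatibility check is what lets annotations drive augmentation: composed dialogues stay state-consistent by construction, so a synthetic dialogue never asserts conflicting slot values across turns.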