Conditional Response Augmentation for Dialogue Using Knowledge Distillation

INTERSPEECH 2020 (modified: 16 Oct 2022)
Abstract: This paper studies the dialogue response selection task. Since state-of-the-art approaches are neural models that require large training sets, data augmentation is essential to overcome the sparsity of observational annotation, where only the one observed response is annotated as gold. We propose counterfactual augmentation: considering whether an unobserved utterance would "counterfactually" replace the labelled response for the given context, and augmenting only when that is the case. We empirically show that our pipeline improves BERT-based models on two different response selection tasks without incurring annotation overheads.
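The augmentation idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `score_fn`, the threshold, and the toy lexical-overlap scorer are all assumptions standing in for the knowledge-distilled teacher model the title refers to.

```python
def augment(context, gold_response, candidate_pool, score_fn, threshold=0.3):
    """Return (context, response, label=1) pairs: the gold pair, plus any
    unobserved candidate that would 'counterfactually' replace the gold
    response for this context according to score_fn."""
    augmented = [(context, gold_response, 1)]
    for cand in candidate_pool:
        if cand != gold_response and score_fn(context, cand) >= threshold:
            augmented.append((context, cand, 1))
    return augmented


def toy_score(context, response):
    # Hypothetical scorer: lexical overlap with the context as a crude
    # stand-in for a teacher model's plausibility score.
    ctx, resp = set(context.split()), set(response.split())
    return len(ctx & resp) / max(len(resp), 1)


pairs = augment(
    "where shall we eat tonight",
    "let us try the new pizza place",
    ["we could eat at the pizza place tonight", "the stock market fell"],
    toy_score,
)
# Only the on-topic candidate is added as an extra positive example.
```

The key design point is that augmentation is selective: an unobserved utterance is promoted to a positive label only when the scorer judges it a plausible replacement, rather than augmenting all candidates indiscriminately.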