LINDA: Unsupervised Learning to Interpolate in Natural Language Processing

TMLR Paper579 Authors

08 Nov 2022 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Despite the success of mixup in data augmentation, its applicability to natural language processing (NLP) tasks has been limited by the discrete and variable-length nature of natural languages. Recent studies have thus relied on domain-specific heuristics and manually crafted resources, such as dictionaries, in order to apply mixup in NLP. In this paper, we instead propose an unsupervised learning approach to text interpolation for the purpose of data augmentation, which we refer to as `Learning to INterpolate for Data Augmentation' (LINDA). It requires no heuristics or manually crafted resources but learns to interpolate between any pair of natural language sentences over a natural language manifold. After empirically demonstrating LINDA's interpolation capability, we show that LINDA allows us to seamlessly apply mixup in NLP and leads to better generalization in text classification both in-domain and out-of-domain.
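For context, the mixup technique the abstract builds on is a convex combination of two training examples and their labels (Zhang et al., 2018). A minimal sketch on continuous inputs is below; note this is an illustration of classical mixup, not of LINDA itself — the abstract's point is precisely that this linear interpolation does not carry over directly to discrete, variable-length text.

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.2, rng=None):
    """Classical mixup: a convex combination of two examples and
    their labels, with the mixing weight drawn from Beta(alpha, alpha).
    Works on continuous inputs; discrete text has no such arithmetic,
    which is the gap LINDA addresses."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2         # interpolated input
    y = lam * y1 + (1 - lam) * y2         # interpolated (soft) label
    return x, y, lam

# Toy example: two 4-dimensional inputs with one-hot labels.
x_mix, y_mix, lam = mixup(np.ones(4), np.zeros(4),
                          np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Because labels are mixed as well, the resulting soft label `y_mix` always sums to one, which is what lets a standard cross-entropy loss be applied unchanged.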
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: In response to the comments, we have carefully revised the manuscript, including correcting an incorrect citation and equation, clarifying the explanation of related work and motivation (Section 1), and toning down our claims (Section 5).
Assigned Action Editor: ~Tao_Qin1
Submission Number: 579