Dialogue Pidgin Text Adaptation via Contrastive Fine-Tuning

Published: 08 Apr 2022, Last Modified: 05 May 2023
AfricaNLP 2022
Keywords: pidgin, language generation, dialogue, multilingual
TL;DR: We show that the proposed two-stage training approach can adapt high-resource English conversation texts into natural, high-fidelity Pidgin sentences in low-resource scenarios.
Abstract: The surging demand for multilingual dialogue systems often requires a costly labeling process for each added language. For low-resource languages, human annotators are repeatedly tasked with adapting utterances from resource-rich languages to each new domain. This prohibitive and impractical process is often a bottleneck for low-resource languages that still lack proper translation systems or parallel corpora. In particular, it is difficult to obtain task-specific low-resource-language annotations for English-derived creoles (e.g., Nigerian and Cameroonian Pidgin). To address this issue, we leverage a pretrained language model, BART, which has shown great potential in language generation and understanding. We propose to fine-tune BART to generate utterances in Pidgin by exploiting the proximity of the source and target languages and by using positive and negative examples in a contrastive training objective. We collect and release the first parallel Pidgin-English conversation corpus in two dialogue domains and show that this simple and effective technique is sufficient to yield impressive results for English-to-Pidgin generation between these two closely related languages.
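To make the method concrete, below is a minimal sketch of what contrastive fine-tuning of BART with positive and negative target examples could look like in Hugging Face Transformers. This is not the authors' released implementation: the `facebook/bart-base` checkpoint, the margin value, the use of sequence-level negative log-likelihood as the score, and the construction of negatives from untranslated English copies are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's released code) of a
# contrastive fine-tuning objective for English-to-Pidgin generation.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def sequence_nll(src, tgt):
    """Mean token-level negative log-likelihood of tgt given src."""
    enc = tokenizer(src, return_tensors="pt", padding=True, truncation=True)
    labels = tokenizer(tgt, return_tensors="pt", padding=True,
                       truncation=True).input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    return model(**enc, labels=labels).loss

def contrastive_loss(english, pidgin_pos, pidgin_neg, margin=1.0):
    """Cross-entropy on the gold Pidgin target plus a hinge term that
    pushes the positive target to score better than the negative."""
    pos = sequence_nll(english, pidgin_pos)
    neg = sequence_nll(english, pidgin_neg)
    return pos + torch.clamp(margin + pos - neg, min=0.0)

# Hypothetical triple; here the negative is an untranslated English copy.
loss = contrastive_loss(["Have you eaten?"],
                        ["You don chop?"],
                        ["Have you eaten?"])
loss.backward()  # plug into a standard optimizer/training loop
```

In the paper's two-stage setup this contrastive stage would presumably follow an initial fine-tuning stage; the sketch covers only the contrastive objective.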