Keywords: LLM, Synthetic Data, Generative Commonsense Reasoning, Natural Language Generation, Diversification
Abstract: Conversational agents are required to respond to their users not only with high-quality (i.e., commonsense-bearing) responses, but also by considering multiple plausible alternative scenarios, reflecting diversity in their responses. Despite the growing need to train diverse commonsense generators, progress on this line of work has been significantly hindered by the lack of large-scale, high-quality, diverse commonsense training datasets. Due to high annotation costs, existing Generative Commonsense Reasoning (GCR) datasets are created using a small number of human annotators, covering only a narrow set of commonsense scenarios. To address this training resource gap, we propose a two-stage method to create CommonSyn, the first-ever synthetic dataset for diversified GCR. Models fine-tuned on our synthetic data jointly improve both generation diversity and quality compared with vanilla models and models fine-tuned on a human-crafted dataset, across Large Language Models (LLMs) of different sizes.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: NLP datasets, automatic evaluation of datasets, corpus creation
Contribution Types: Data resources
Languages Studied: English
Submission Number: 2419