DemoNSF: A Multi-task Demonstration-based Generative Framework for Noisy Slot Filling Task

Published: 07 Oct 2023 · Last Modified: 01 Dec 2023 · EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Dialogue and Interactive Systems
Submission Track 2: Natural Language Generation
Keywords: Noisy Slot Filling, Input Perturbations, Multi-task Learning, Generative Framework, Large Language Model, Demonstration Learning
TL;DR: In this paper, we propose a multi-task demonstration-based generative framework for the noisy slot filling task, named DemoNSF, which includes three noisy auxiliary tasks and a novel noisy demonstration retrieval strategy.
Abstract: Recently, prompt-based generative frameworks have shown impressive capabilities in sequence labeling tasks. However, in practical dialogue scenarios, relying solely on simplistic templates and traditional corpora presents a challenge for these methods in generalizing to unknown input perturbations. To address this gap, we propose a multi-task demonstration-based generative framework for noisy slot filling, named DemoNSF. Specifically, we introduce three noisy auxiliary tasks, namely noisy recovery (NR), random mask (RM), and hybrid discrimination (HD), to implicitly capture semantic structural information of input perturbations at different granularities. In the downstream main task, we design a noisy demonstration construction strategy for the generative framework, which explicitly incorporates task-specific information and perturbed distribution during training and inference. Experiments on two benchmarks demonstrate that DemoNSF outperforms all baseline methods and achieves strong generalization. Further analysis provides empirical guidance for the practical application of generative frameworks. Our code is released at https://github.com/dongguanting/Demo-NSF.
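Illustrative sketch: the abstract describes a noisy demonstration construction strategy that injects task-specific and perturbation-aware examples into the prompt of a generative model. The minimal Python sketch below conveys that general idea only; the retrieval function, similarity measure, and prompt format are hypothetical placeholders, not DemoNSF's actual implementation (see the released code for that).

```python
# Hypothetical sketch: retrieve demonstrations similar to a (possibly perturbed)
# input from a pool mixing clean and noisy examples, then prepend them to the
# prompt of a generative slot-filling model. All names are illustrative.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    # Cheap surface-level similarity; the paper's retriever may differ.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def retrieve_noisy_demos(query: str, pool: list[tuple[str, str]], k: int = 2):
    # pool: (utterance, slot-annotation) pairs drawn from clean and perturbed data.
    ranked = sorted(pool, key=lambda ex: similarity(query, ex[0]), reverse=True)
    return ranked[:k]


def build_prompt(query: str, demos: list[tuple[str, str]]) -> str:
    # Serialize demonstrations as input/output pairs ahead of the query so the
    # generator sees task-specific, perturbation-aware context.
    lines = [f"Input: {utt}\nSlots: {slots}" for utt, slots in demos]
    lines.append(f"Input: {query}\nSlots:")
    return "\n\n".join(lines)


if __name__ == "__main__":
    pool = [
        ("book a flght from boston to denver", "from_city: boston; to_city: denver"),
        ("play some jazz music", "genre: jazz"),
    ]
    query = "book a fligt from boston to seattle"  # typo-style perturbation
    print(build_prompt(query, retrieve_noisy_demos(query, pool)))
```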
Submission Number: 450