Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning
Highlights
• We are the first to explore zero-shot language extension for multilingual DST.
• The extended DST handles multilingual input and generates a unified dialogue state.
• A pre-trained models + multi-auxiliary-tasks fine-tuning method is proposed.
• Experiments demonstrate the effectiveness, superiority, and robustness of our method.
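To make the highlighted idea concrete, the sketch below shows one common way a "pre-trained model + multi-auxiliary-tasks fine-tuning" objective can be wired up: a shared encoder feeds a main dialogue-state head and an auxiliary head, and their losses are combined with a weight. This is a minimal illustration under assumed choices (a stand-in encoder instead of a real multilingual pre-trained model, language identification as the auxiliary task, and the `aux_weight` hyperparameter); it is not the authors' released implementation.

```python
# Hypothetical sketch: multi-auxiliary-task fine-tuning with a shared encoder.
# The encoder, auxiliary task, and loss weight are assumptions for illustration.
import torch
import torch.nn as nn

class MultiTaskDST(nn.Module):
    def __init__(self, hidden_size=768, num_slot_values=100, num_languages=2, aux_weight=0.3):
        super().__init__()
        # Stand-in for a multilingual pre-trained encoder (e.g., mBERT / XLM-R).
        self.encoder = nn.Sequential(
            nn.Embedding(30000, hidden_size),
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(hidden_size, nhead=8, batch_first=True),
                num_layers=2,
            ),
        )
        self.dst_head = nn.Linear(hidden_size, num_slot_values)   # main task: slot-value prediction
        self.lang_head = nn.Linear(hidden_size, num_languages)    # auxiliary task: language identification
        self.aux_weight = aux_weight

    def forward(self, input_ids, slot_labels, lang_labels):
        hidden = self.encoder(input_ids)           # (batch, seq, hidden)
        pooled = hidden.mean(dim=1)                # simple mean pooling over tokens
        dst_loss = nn.functional.cross_entropy(self.dst_head(pooled), slot_labels)
        aux_loss = nn.functional.cross_entropy(self.lang_head(pooled), lang_labels)
        # Joint objective: main DST loss plus weighted auxiliary loss.
        return dst_loss + self.aux_weight * aux_loss

# Toy usage: one optimization step on random data.
model = MultiTaskDST()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
input_ids = torch.randint(0, 30000, (4, 16))
loss = model(input_ids, torch.randint(0, 100, (4,)), torch.randint(0, 2, (4,)))
loss.backward()
optimizer.step()
```

The design point the highlights emphasize is that the extended tracker accepts input in multiple languages while producing a single, unified dialogue state; in a sketch like this, that corresponds to all languages sharing the same encoder and the same slot-value output space.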