Abstract: While conversational semantic role labeling (CSRL) has proven useful for Chinese conversational tasks, it remains under-explored in other languages due to the lack of multilingual CSRL annotations for parser training. To avoid expensive data collection and the error propagation of translation-based methods, we present a simple but effective approach to zero-shot cross-lingual CSRL. Our model implicitly learns language-agnostic, conversational-structure-aware, and semantically rich representations through hierarchical encoders and elaborately designed pre-training objectives. Through comprehensive experiments, we find that our cross-lingual model not only outperforms baselines by large margins but is also robust in low-resource scenarios. More impressively, we use CSRL information to help downstream English conversational tasks, including question-in-context rewriting and multi-turn dialogue response generation. Although we obtain competitive performance on these tasks without CSRL information, substantial further improvements are achieved after introducing it, which indicates both the effectiveness of our cross-lingual CSRL model and the usefulness of CSRL for English dialogue tasks.
One-sentence Summary: We present a simple but effective approach to zero-shot cross-lingual CSRL and further confirm the usefulness of CSRL for non-Chinese dialogue tasks.