Zero-shot Cross-lingual Conversational Semantic Role Labeling

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: While conversational semantic role labeling (CSRL) has proven useful for Chinese conversational tasks, it remains under-explored in non-Chinese languages due to the lack of multilingual CSRL annotations for parser training. To avoid expensive data collection and the error propagation of translation-based methods, we present a simple but effective approach to zero-shot cross-lingual CSRL. Our model implicitly learns language-agnostic, conversational structure-aware, and semantically rich representations through hierarchical encoders and elaborately designed pre-training objectives. Experimental results show that our cross-lingual model not only outperforms baselines by large margins but is also robust in low-resource scenarios. More importantly, we confirm the usefulness of CSRL for English conversational tasks, such as question-in-context rewriting and multi-turn dialogue response generation, by incorporating CSRL information into downstream conversation-based models. We believe this finding is significant and will facilitate research on English dialogue tasks, which suffer from ellipsis and anaphora.