Cross-Lingual Event Detection via Optimized Adversarial Training

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=ZdBAJ_wjgIn
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: In this work, we focus on Cross-Lingual Event Detection, where a model is trained on data from a $\textit{source}$ language but its performance is evaluated on data from a second, $\textit{target}$, language. Most recent works in this area have harnessed the language-invariant qualities displayed by pre-trained multilingual language models. Their performance, however, reveals there is room for improvement, as the cross-lingual setting entails particular challenges. We employ Adversarial Language Adaptation to train a Language Discriminator to discern between the source and target languages using unlabeled data. The discriminator is trained in an adversarial manner so that the encoder learns to produce refined, language-invariant representations that lead to improved performance. More importantly, we optimize the adversarial training process by presenting the discriminator with only the most informative samples. We base our intuition about what makes a sample informative on two disparate metrics: sample similarity and event presence. Thus, we propose leveraging Optimal Transport as a solution that naturally combines these two distinct information sources into the selection process. Extensive experiments on 8 different language pairs, using 4 languages from unrelated families, show the flexibility and effectiveness of our model, which achieves state-of-the-art results.
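As a rough illustration of the Optimal Transport step the abstract describes, the sketch below computes an entropically regularized transport plan (via Sinkhorn iterations) between source- and target-language sample embeddings, then scores target samples by the transport mass they receive. This is not the paper's actual implementation; all names, the cost function, and the selection heuristic here are assumptions for demonstration only.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b : marginal weights over source / target samples (sum to 1).
    C    : cost matrix, e.g. pairwise distances between embeddings.
    Returns a transport plan P whose row sums approximate a and
    column sums approximate b.
    """
    K = np.exp(-C / reg)          # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):      # alternate marginal-matching scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: 3 source samples and 4 target samples with 5-dim embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=(3, 5))
tgt = rng.normal(size=(4, 5))
C = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)  # pairwise costs
a = np.full(3, 1 / 3)
b = np.full(4, 1 / 4)
P = sinkhorn(a, b, C)

# Hypothetical selection heuristic: target samples receiving the most
# transport mass from some source sample are treated as most informative.
scores = P.max(axis=0)
selected = np.argsort(-scores)[:2]  # pick the top-2 target samples
```

In the paper's setting, the cost matrix could additionally encode the event-presence signal (e.g. by discounting the cost of pairs where both samples contain event triggers), which is one plausible way to fold the two information sources into a single transport problem.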
Presentation Mode: This paper will be presented in person in Seattle
Copyright Consent Signature (type Name Or NA If Not Transferrable): Luis Fernando Guzman Nateras
Copyright Consent Name And Address: Luis Fernando Guzman Nateras, University of Oregon, 1585 E 13th Ave, Eugene, Oregon, USA