Relational Graph Attention Networks for Syntax Encoding in Zero-shot Cross-lingual Semantic Role Labeling

Anonymous

17 Feb 2023 (modified: 05 May 2023) · ACL ARR 2023 February Blind Submission · Readers: Everyone
Abstract: Recent models in cross-lingual semantic role labeling (SRL) rely heavily on BiLSTMs, a variant of RNNs, as their main encoders. However, a previous study in dependency parsing has shown that RNN-based cross-lingual models are ineffective for distant languages. We therefore propose graph neural networks (GNNs) built on dependency trees to replace BiLSTMs as the encoder in cross-lingual models. We hypothesize that encoding sentences according to their dependency trees helps cross-lingual SRL models generalize better. Using a simple encoder-decoder architecture, we compare several GNNs, namely gated graph convolutional networks (GGCNs), graph attention networks (GATs), two-attention relational GATs (2ATT-GATs), and modified Transformer self-attention networks (SATs). We focus on a zero-shot setting and evaluate the models on the 23 languages available in the Universal Proposition Bank. The evaluation shows that 2ATT-GATs outperform the other GNNs. Moreover, comparisons against BiLSTM-based models show that 2ATT-GATs are more effective for building cross-lingual SRL models, especially for languages with different word orders.
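For readers unfamiliar with relational graph attention over dependency trees, the sketch below illustrates the general idea in PyTorch: token nodes attend to their syntactic neighbours, with attention scores conditioned on dependency-relation embeddings. This is an illustrative assumption only, not the authors' 2ATT-GAT; the class name, shapes, scoring function, and toy sentence are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation) of a relational
# graph attention layer over a dependency tree.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalGATLayer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)          # node feature projection
        self.rel_emb = nn.Embedding(num_relations, dim)   # dependency-relation embeddings
        self.attn = nn.Linear(3 * dim, 1)                 # scores [head; dependent; relation]

    def forward(self, h, edges, rels):
        # h:     (num_tokens, dim)  token representations
        # edges: (num_edges, 2)     (head_index, dependent_index) pairs from the parse
        # rels:  (num_edges,)       dependency-relation label ids
        z = self.w(h)
        src, dst = edges[:, 0], edges[:, 1]
        r = self.rel_emb(rels)
        # Un-normalised attention score for every dependency edge.
        e = F.leaky_relu(self.attn(torch.cat([z[src], z[dst], r], dim=-1))).squeeze(-1)
        # Normalise scores over the incoming edges of each dependent token.
        alpha = torch.zeros_like(e)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(e[mask], dim=0)
        # Aggregate head (plus relation) information into each dependent token.
        out = torch.zeros_like(z)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * (z[src] + r))
        return F.relu(out + z)  # residual keeps tokens without incoming edges informative


# Toy usage: a 4-token sentence whose parse has 3 dependency edges.
layer = RelationalGATLayer(dim=16, num_relations=40)
h = torch.randn(4, 16)
edges = torch.tensor([[1, 0], [1, 2], [1, 3]])  # token 1 is the syntactic head
rels = torch.tensor([5, 7, 12])
print(layer(h, edges, rels).shape)  # torch.Size([4, 16])
```

Stacking such layers lets each token aggregate information along parse edges rather than linear word order, which is the intuition behind replacing BiLSTM encoders for cross-lingual transfer.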
Paper Type: long
Research Area: Semantics: Sentence-level Semantics, Textual Inference and Other areas