Incorporating Graph Information in Transformer-based AMR Parsing

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission · Readers: Everyone
Keywords: semantic parsing, AMR parsing, AMR, GNN, Transformer adapters, self-knowledge distillation, knowledge distillation
Abstract: Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims to provide a semantic graph abstraction of a given text. Current approaches employ Transformer-based autoregressive language models such as BART or T5, fine-tuned through teacher forcing to produce a linearized version of the AMR graph from a sentence. In this paper, we explore a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph structural information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment, we can construct a graph over the input tokens and use it to embed structural information into the hidden states throughout the Encoder. While employing the gold graph structure constitutes a data leak, we demonstrate that the resulting performance gain can be preserved implicitly via self-knowledge distillation, yielding a new State-of-the-Art (SotA) AMR parser that improves over previous approaches even without the use of additional data.
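The following is a minimal sketch of the kind of graph-aware adapter the abstract describes, assuming a PyTorch setup: a bottleneck adapter whose hidden states are mixed with their graph neighbours (from a word-to-node-derived adjacency) before being projected back. The class name, bottleneck size, and GCN-style neighbour averaging are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class StructuralAdapter(nn.Module):
    """Illustrative structural adapter: a bottleneck adapter whose
    down-projected states are aggregated over a token-level graph,
    so each token's representation is mixed with its neighbours.
    Hyperparameters and wiring are assumptions, not the paper's setup."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 128):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.graph_mix = nn.Linear(bottleneck, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_size) encoder hidden states
        # adj:    (batch, seq_len, seq_len) adjacency derived from
        #         word-to-node alignment, row-normalized
        z = self.act(self.down(hidden))
        z = self.act(self.graph_mix(torch.bmm(adj, z)))  # neighbour aggregation
        return hidden + self.up(z)                        # residual connection

# Usage sketch: apply the adapter to each encoder layer's output.
if __name__ == "__main__":
    adapter = StructuralAdapter()
    states = torch.randn(2, 10, 768)
    adj = torch.eye(10).repeat(2, 1, 1)  # placeholder self-loop adjacency
    print(adapter(states, adj).shape)    # torch.Size([2, 10, 768])
```

Because the adjacency comes from the gold graph, a module like this would only be usable during training; preserving its benefit at inference time is what the self-knowledge distillation step in the abstract addresses.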
Paper Type: long
Research Area: Semantics: Sentence-level Semantics, Textual Inference and Other areas