Dependency Parsing is More Parameter-Efficient with Normalization

Published: 26 Jun 2025, Last Modified: 15 Jul 2025 · MLoG-GenAI@KDD Poster · CC BY 4.0
Keywords: semantic dependency parsing, biaffine attention, normalization, graph neural networks
Abstract: Dependency parsing is the task of inferring natural language structure, often approached by modeling word interactions via attention through biaffine scoring. This mechanism works like self-attention in Transformers, where a score is computed for every pair of words in a sentence. However, unlike Transformer attention, biaffine scoring applies no normalization before taking the softmax of the scores. In this paper, we provide theoretical evidence and empirical results showing that the lack of normalization necessarily yields overparameterized parser models, where the extra parameters compensate for the sharp softmax outputs produced by high-variance inputs to the biaffine scoring function. We argue that biaffine scoring can be made substantially more efficient by normalizing the scores. We conduct experiments on six datasets for semantic and syntactic dependency parsing using a one-hop parser and a multi-hop GNN parser. We train $N$-layer stacked BiLSTMs and evaluate parser performance with and without normalized biaffine scores. Normalization allows us to reach state-of-the-art performance with fewer training samples and fewer trainable parameters. Code: https://anonymous.4open.science/r/EfficientSDP-7A93
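For illustration, the following is a minimal sketch of the idea, not the authors' implementation (the exact parser architecture and normalizer are in the linked repository): a biaffine arc scorer in PyTorch that optionally divides the pairwise scores by sqrt(d) before the softmax, mirroring scaled dot-product attention. The class name, the 1/sqrt(d) scale, and the single head-bias term are assumptions made for this sketch.

import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    # Sketch of a biaffine arc scorer; `normalize` toggles score normalization
    # before the softmax (assumed here to be division by sqrt(dim)).
    def __init__(self, dim: int, normalize: bool = True):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))   # bilinear interaction term
        self.bias = nn.Parameter(torch.zeros(dim))     # linear bias on head representations
        nn.init.xavier_uniform_(self.W)
        self.normalize = normalize
        self.scale = dim ** 0.5

    def forward(self, head: torch.Tensor, dep: torch.Tensor) -> torch.Tensor:
        # head, dep: (batch, seq_len, dim) token representations, e.g. from a BiLSTM encoder.
        # scores[b, i, j] = dep_i^T W head_j  -> (batch, seq_len, seq_len)
        scores = dep @ self.W @ head.transpose(1, 2)
        scores = scores + (head @ self.bias).unsqueeze(1)   # broadcast head bias over dependents
        if self.normalize:
            scores = scores / self.scale                     # temper high-variance scores
        # Distribution over candidate heads for each dependent token.
        return scores.softmax(dim=-1)

Without the division by the scale, high-variance scores push the softmax toward near-one-hot outputs, which (per the abstract's argument) the model must otherwise compensate for with extra parameters.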
Submission Number: 18