Keywords: dependency parsing, constrained decoding, finite state machine, structured prediction
Abstract: While large language models (LLMs) have set new benchmarks for dependency parsing, achieving these gains typically requires fine-tuning billions of parameters, making such approaches impractical in resource-constrained environments. For structured prediction tasks, we show that explicit structural constraints can effectively compensate for limited model capacity. We propose a lightweight parsing framework that combines compact encoder-decoder models (with millions rather than billions of parameters) with a task-specific finite-state machine (FSM) to guide decoding. By decomposing parsing into a pipelined prediction of heads and relations, our method enforces structural validity and reduces error propagation. Experiments on Universal Dependencies (UD) demonstrate that our approach achieves accuracy competitive with decoder-only models over $100\times$ larger. These results suggest that incorporating domain-specific structural constraints offers an efficient alternative to indiscriminate model scaling for dependency parsing.
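The FSM-guided decoding sketched in the abstract can be illustrated with a minimal toy example. The state machine, token vocabulary, and class names below are assumptions for illustration only (the paper's actual FSM and output format are not specified here): the FSM alternates between expecting a head-index token and a relation-label token for each dependent, so the decoder can only emit structurally valid head/relation sequences.

```python
# Minimal sketch of FSM-constrained decoding for dependency parsing.
# All names and the token inventory are hypothetical, not the paper's actual design.

HEAD_TOKENS = {f"<head:{i}>" for i in range(10)}       # hypothetical head-index tokens
REL_TOKENS = {"nsubj", "obj", "root", "det", "amod"}   # hypothetical relation labels


class ParserFSM:
    """Two-state machine: expect a head index, then a relation label, per dependent."""

    def __init__(self):
        self.state = "HEAD"

    def allowed(self):
        # The decoder masks its output distribution to this set at each step,
        # which is how the FSM enforces structural validity during generation.
        return HEAD_TOKENS if self.state == "HEAD" else REL_TOKENS

    def step(self, token):
        if token not in self.allowed():
            raise ValueError(f"invalid token {token!r} in state {self.state}")
        self.state = "REL" if self.state == "HEAD" else "HEAD"


# Decoding a two-word sentence: each word gets a (head, relation) pair.
fsm = ParserFSM()
for tok in ["<head:2>", "nsubj", "<head:0>", "root"]:
    fsm.step(tok)
```

In a real decoder, `allowed()` would be used to mask logits before sampling, so invalid continuations receive zero probability rather than raising an error.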
Paper Type: Short
Research Area: Hierarchical Structure Prediction, Syntax, and Parsing
Research Area Keywords: Syntax: Tagging, Chunking and Parsing, Machine Learning for NLP, Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English, German, Czech, Korean, Chinese
Submission Number: 10464