Abstract: Dependency parsing is a fundamental task in natural language processing (NLP) that identifies syntactic dependencies between words and constructs a syntactic tree for a given sentence. Traditional dependency parsing models typically build token embeddings and stack additional task-specific layers on top for prediction. We propose a novel dependency parsing method that relies solely on an encoder model trained in a text-to-text fashion. To facilitate this, we introduce a structured prompt template that effectively captures the structural information of dependency trees. Despite relying solely on a pre-trained model, our method achieves state-of-the-art unlabeled attachment score (UAS, 97.41) and outperforms most previous approaches in labeled attachment score (LAS, 96.16) on the English Penn Treebank. Furthermore, the method adapts readily to different pre-trained models, target languages, and training environments, and allows easy integration of task-specific features.
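The abstract does not spell out the structured prompt template itself, so the sketch below is only an illustrative assumption: one plausible way to serialize a dependency tree into a text target for text-to-text training. The `linearize_tree` function and the `word -> head [label]` format are hypothetical stand-ins, not the paper's actual template.

```python
# Hypothetical sketch (not the paper's template): serialize a dependency tree
# as a structured text target that a text-to-text model could learn to predict.

def linearize_tree(tokens, heads, labels):
    """Linearize a dependency tree into a structured text string.

    tokens: word forms, e.g. ["She", "reads", "books"]
    heads:  1-based head indices, 0 denoting the root, e.g. [2, 0, 2]
    labels: dependency relations, e.g. ["nsubj", "root", "obj"]
    """
    parts = []
    for tok, head, label in zip(tokens, heads, labels):
        head_word = "ROOT" if head == 0 else tokens[head - 1]
        parts.append(f"{tok} -> {head_word} [{label}]")
    return " ; ".join(parts)


if __name__ == "__main__":
    target = linearize_tree(
        ["She", "reads", "books"],
        [2, 0, 2],
        ["nsubj", "root", "obj"],
    )
    print(target)
    # She -> reads [nsubj] ; reads -> ROOT [root] ; books -> reads [obj]
```

Under this assumed format, each word's head and relation appear as plain text, so a pre-trained encoder can be fine-tuned to produce the tree without any dedicated parsing layers.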
Paper Type: Long
Research Area: Syntax: Tagging, Chunking and Parsing
Research Area Keywords: dependency parsing
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English, Bulgarian, Catalan, Czech, German, Spanish, French, Italian, Dutch, Norwegian, Romanian, Russian, Korean
Keywords: dependency parsing
Submission Number: 5158