Improving Unsupervised Multi-Lingual Dependency Parsing via Dynamic Feature Alignment

ACL ARR 2026 January Submission8723 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: multi-lingual, unsupervised learning, dependency parsing, feature alignment, low-resource languages
Abstract: Multi-lingual dependency parsing aims to leverage shared syntactic structures across languages to improve parsing accuracy in low-resource scenarios. However, direct transfer often yields suboptimal performance due to significant linguistic variation across diverse languages. To address this issue, we propose a novel approach to unsupervised multi-lingual dependency parsing via dynamic feature alignment. Specifically, we first construct multilingual aligned dependency treebanks by leveraging the collaborative annotation of multiple Large Language Models (LLMs). Subsequently, we design a dynamic feature alignment network that automatically selects beneficial syntactic features and filters out harmful ones. Experiments on multiple benchmark datasets demonstrate that our proposed method significantly outperforms all strong baselines. In-depth comparison experiments confirm that dynamic feature alignment enables the model to adaptively fuse features from multiple high-resource languages. In addition, detailed error analysis further validates that our feature selection strategy is well suited to dynamic parameter adaptation. Our code and data are available at https://github.com/**.
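The abstract does not specify how the dynamic feature alignment network fuses source-language features. A minimal illustrative sketch, assuming a softmax-gated weighted sum over per-language feature vectors (the function name, gating form, and dimensions are our assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over gate logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_feature_alignment(source_feats, gate_logits):
    """Fuse per-source-language syntactic features with learned gates.

    source_feats: list of (d,) feature vectors, one per high-resource language.
    gate_logits:  (n,) unnormalized scores; in the paper these would be
                  produced by a learned network (assumption: fixed here).
    Returns the fused (d,) feature vector and the gate weights.
    """
    weights = softmax(gate_logits)
    # Harmful features are suppressed by near-zero gate weights.
    fused = sum(w * f for w, f in zip(weights, source_feats))
    return fused, weights

# Toy usage: three source languages, 4-dimensional features.
feats = [np.array([1.0, 0.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0, 0.0]),
         np.array([0.0, 0.0, 1.0, 0.0])]
fused, weights = dynamic_feature_alignment(feats, np.array([2.0, 0.0, -2.0]))
```

In this sketch, the gate logits stand in for the output of the selection network; a language whose features hurt the target parser would receive a strongly negative logit and contribute almost nothing to the fused representation.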
Paper Type: Long
Research Area: Hierarchical Structure Prediction, Syntax, and Parsing
Research Area Keywords: unsupervised dependency parsing, multi-lingual dependency parsing, feature alignment
Contribution Types: Approaches to low-resource settings, Theory
Languages Studied: Vietnamese, Tamil, Telugu, Maltese
Submission Number: 8723