Stop-probability estimates computed on a large corpus improve Unsupervised Dependency Parsing

ACL 2013 (modified: 16 Jul 2019)
Abstract: Even though the quality of unsupervised dependency parsers continues to improve, they often fail to recognize very basic dependencies. In this paper, we exploit prior knowledge of STOP probabilities (whether a given word has any children in a given direction), obtained from a large raw corpus using the reducibility principle. By incorporating this knowledge into the Dependency Model with Valence, we considerably outperform the state-of-the-art results in terms of average attachment score over 20 treebanks from the CoNLL 2006 and 2007 shared tasks.
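The following is a minimal sketch, not the authors' implementation, of how corpus-derived STOP-probability estimates could serve as priors for the stop parameters of the Dependency Model with Valence (DMV). All names (estimate_stop_priors, init_dmv_stop_params) and the reducibility counts are hypothetical; in the paper these estimates come from a large raw corpus via the reducibility principle.

```python
from collections import defaultdict

def estimate_stop_priors(reducible_counts, total_counts, smoothing=1.0):
    """Estimate P(STOP | tag, direction) from reducibility statistics.

    A word that is frequently reducible (removable without breaking the
    sentence) tends to be a leaf, i.e. it should STOP in both directions.
    """
    priors = {}
    for (tag, direction), total in total_counts.items():
        reducible = reducible_counts.get((tag, direction), 0)
        priors[(tag, direction)] = (reducible + smoothing) / (total + 2 * smoothing)
    return priors

def init_dmv_stop_params(tags, priors, default=0.5):
    """Initialize DMV stop parameters with the corpus-derived priors."""
    stop = defaultdict(lambda: default)
    for tag in tags:
        for direction in ("left", "right"):
            stop[(tag, direction)] = priors.get((tag, direction), default)
    return stop

# Hypothetical counts: adjectives are usually reducible (leaves),
# while verbs rarely are (they take dependents on both sides).
reducible = {("ADJ", "left"): 90, ("ADJ", "right"): 95,
             ("VERB", "left"): 10, ("VERB", "right"): 5}
totals = {("ADJ", "left"): 100, ("ADJ", "right"): 100,
          ("VERB", "left"): 100, ("VERB", "right"): 100}

priors = estimate_stop_priors(reducible, totals)
stop_params = init_dmv_stop_params(["ADJ", "VERB", "NOUN"], priors)
print(stop_params[("ADJ", "right")])   # high -> adjectives likely have no right children
print(stop_params[("VERB", "right")])  # low  -> verbs likely take right dependents
```

In this sketch the priors simply replace DMV's uniform initialization of stop probabilities; whether they are used as an initializer, a fixed prior, or a soft constraint during learning is a design choice the abstract itself does not specify.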