Improving Syntactic Parsing with Consistency Learning

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: In this paper, we propose using \emph{consistency learning} to improve constituency and dependency parsing performance in a multi-task setting, exploiting a consistency constraint between the two predictions. While multi-task learning implicitly learns shared representations for multiple sub-tasks, our method introduces an explicit consistency objective, which encourages shared representations that yield consistent predictions. Our intuition is that correct predictions are more likely to be consistent ones. To impose consistency constraints, we propose a general method for introducing consistency objectives, as well as other prior knowledge, into existing neural models. This method requires only a boolean function that indicates whether the multiple predictions are consistent, and this function does not need to be differentiable. We demonstrate the efficacy of our method by showing that it outperforms a state-of-the-art joint dependency and constituency parser on the Chinese Treebank (CTB).
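As a minimal illustration of what such a boolean consistency function could look like (the abstract does not specify the exact condition used, so this sketch assumes the standard span-head compatibility criterion between a constituency tree and a projective dependency tree: every constituent span contains exactly one token whose head lies outside the span):

```python
def is_consistent(spans, heads):
    """Hypothetical boolean consistency check between a constituency
    prediction and a dependency prediction.

    spans: list of inclusive (i, j) constituent spans over token indices.
    heads: heads[t] is the head index of token t, or -1 for the root.

    Returns True iff every span has exactly one token whose head falls
    outside the span (its lexical head). Non-differentiable by design.
    """
    for (i, j) in spans:
        external = sum(1 for t in range(i, j + 1)
                       if heads[t] < i or heads[t] > j)
        if external != 1:
            return False
    return True


# "the cat sleeps": the -> cat -> sleeps (root); spans (0,1) and (0,2)
# are compatible with these dependency arcs.
print(is_consistent([(0, 1), (0, 2)], [1, 2, -1]))   # True
# Span (1, 2) has two tokens headed outside it, so it is inconsistent.
print(is_consistent([(1, 2)], [2, 0, -1]))           # False
```

Because the function is only required to return a boolean, it can encode arbitrary prior knowledge without needing gradients through the check itself.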