Parsing Natural Language into Propositional and First-Order Logic with Dual Reinforcement Learning

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Semantic parsing converts natural language into structured logical expressions. In this paper, we consider two such formal representations: Propositional Logic (PL) and First-Order Logic (FOL). Because annotated data in this field is scarce, we use dual reinforcement learning (RL) to make full use of both labeled and unlabeled data. We further propose a new reward mechanism that avoids the need to define rewards manually in RL. To use the training data efficiently and align the learning process with how humans learn, we integrate curriculum learning into our framework. Experimental results show that the proposed method outperforms competing approaches on several datasets. Beyond the technical contribution, we construct a Chinese-PL/FOL dataset to address the lack of data in this field. We aim to release our code and the dataset to aid further research on related tasks.
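To give a sense of the closed-loop idea behind dual RL for this task, the sketch below walks one unlabeled sentence through a primal NL-to-FOL parser and a dual FOL-to-NL generator, and combines a validity reward on the logical form with a reconstruction reward on the round-tripped sentence. All names here (primal_parse, dual_generate, validity_reward, reconstruction_reward) and the toy lookup-table "models" are hypothetical stand-ins for illustration, not the paper's actual models or its proposed reward mechanism.

```python
# Minimal, illustrative sketch of dual RL for NL <-> FOL semantic parsing.
# The "models" are toy lookup tables; a real system would use trained
# neural parsers/generators and use the reward to weight policy gradients.

import random

# Toy unlabeled pool: natural-language sentences without FOL annotations.
UNLABELED_NL = [
    "every student reads a book",
    "some dog barks",
]

def primal_parse(sentence):
    """Hypothetical primal model: proposes a candidate FOL form for a sentence."""
    candidates = {
        "every student reads a book":
            "forall x (Student(x) -> exists y (Book(y) & Reads(x, y)))",
        "some dog barks":
            "exists x (Dog(x) & Barks(x))",
    }
    return candidates.get(sentence, "True")

def dual_generate(logical_form):
    """Hypothetical dual model: maps a FOL form back to a sentence."""
    back = {
        "forall x (Student(x) -> exists y (Book(y) & Reads(x, y)))":
            "every student reads a book",
        "exists x (Dog(x) & Barks(x))":
            "some dog barks",
    }
    return back.get(logical_form, "")

def validity_reward(logical_form):
    """1.0 if parentheses are balanced, else 0.0 -- a crude stand-in for a
    grammar or well-formedness check on the predicted logical form."""
    return 1.0 if logical_form.count("(") == logical_form.count(")") else 0.0

def reconstruction_reward(original, reconstructed):
    """Word-overlap (Jaccard) score between the original sentence and its
    reconstruction from the predicted logical form."""
    a, b = set(original.split()), set(reconstructed.split())
    return len(a & b) / max(len(a | b), 1)

def dual_rl_step(sentence, alpha=0.5):
    """One closed-loop step: NL -> FOL -> NL, with a combined scalar reward.
    In a real dual-RL setup this reward would drive updates to both models."""
    lf = primal_parse(sentence)
    back = dual_generate(lf)
    reward = alpha * validity_reward(lf) + (1 - alpha) * reconstruction_reward(sentence, back)
    return lf, reward

if __name__ == "__main__":
    sent = random.choice(UNLABELED_NL)
    lf, r = dual_rl_step(sent)
    print(f"NL: {sent}\nFOL: {lf}\nreward: {r:.2f}")
```

The reward weighting (alpha) and the specific reward terms above are assumptions for the sketch; the abstract only states that the paper proposes a reward mechanism that removes the need for manual reward definition.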
Paper Type: long
