Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision

Chen Liang, Jonathan Berant, Quoc Le, Kenneth Forbus, Ni Lao

Oct 15, 2016 (modified: Oct 15, 2016) NIPS 2016 workshop NAMPI submission
  • Abstract: Extending the success of deep neural networks to high-level tasks like natural language understanding and symbolic reasoning requires program induction and learning from weak supervision. Recent neural program induction approaches have used either primitive computation components, like those of a Turing machine, or differentiable operations and memory trained by backpropagation. In this work, we propose the Manager-Programmer-Computer framework, which integrates neural networks with operations and memory that are abstract, scalable, and precise but non-differentiable, through a friendly neural computer interface. Specifically, we introduce Neural Symbolic Machines: a sequence-to-sequence neural "programmer" takes natural language input and outputs a program as a sequence of tokens, while a non-differentiable "computer," a Lisp interpreter, provides code assistance through syntax checking and the denotations of partial programs. This integration enables the model to effectively learn a semantic parser from weak supervision over a large knowledge base. Our model obtains new state-of-the-art performance on WebQuestionsSP, a challenging semantic parsing dataset.
  • TL;DR: We introduce Neural Symbolic Machines, which learn to write Lisp programs with weak supervision and obtain new state-of-the-art results on a challenging semantic parsing dataset.
  • Conflicts: None
  • Keywords: Deep learning, Natural language processing, Reinforcement Learning
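The abstract's "code assistance using syntax check" can be illustrated with a toy sketch: the non-differentiable computer, given a partial program, returns only the syntactically valid next tokens, pruning the programmer's decoding search space. This is a minimal illustration under assumed names, not the authors' implementation; the function arities, entities, and relations below are all hypothetical.

```python
# Toy sketch (not the paper's code) of a "computer" that assists a neural
# "programmer" by restricting decoding to syntactically valid next tokens.
# All function names, entities, and relations here are illustrative.

FUNCTIONS = {"hop": 2, "argmax": 2}            # hypothetical function arities
ENTITIES = {"m.USA", "m.Obama"}                # hypothetical KB entities
RELATIONS = {"r.place_of_birth", "r.president_of"}

def valid_next_tokens(partial):
    """Return the tokens that keep the partial Lisp program well-formed."""
    depth = partial.count("(") - partial.count(")")
    if not partial:                  # a program must open an expression
        return {"("}
    last = partial[-1]
    if last == "(":                  # after '(' comes a function name
        return set(FUNCTIONS)
    if last in FUNCTIONS:            # a function takes entity/relation args
        return ENTITIES | RELATIONS
    # inside an expression: more arguments, a sub-expression, or a close
    tokens = ENTITIES | RELATIONS | {"("}
    if depth > 0:
        tokens.add(")")
    return tokens
```

At each decoding step, the programmer's softmax would be restricted to (or renormalized over) this valid set, so weakly supervised search never wastes probability mass on programs the interpreter would reject.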