DCN+: Mixed Objective And Deep Residual Coattention for Question Answering

Caiming Xiong, Victor Zhong, Richard Socher

Feb 15, 2018 (modified: Feb 23, 2018) ICLR 2018 Conference Blind Submission
  • Abstract: Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.
  • TL;DR: We introduce the DCN+ with deep residual coattention and mixed-objective RL, which achieves state of the art performance on the Stanford Question Answering Dataset.
  • Keywords: question answering, deep learning, natural language processing, reinforcement learning
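The mixed objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the word-overlap F1 follows the standard SQuAD-style token F1, and the `sigma` weighting between the cross entropy and self-critical terms is a hypothetical hyperparameter introduced here for clarity.

```python
from collections import Counter

def f1_overlap(pred_tokens, gold_tokens):
    """Word-overlap F1 between a predicted answer span and the gold answer.
    This overlap score supplies the reward for self-critical policy learning."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    n_common = sum(common.values())
    if n_common == 0:
        return 0.0
    precision = n_common / len(pred_tokens)
    recall = n_common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def mixed_objective(ce_loss, sampled_f1, greedy_f1, log_prob_sampled, sigma=0.5):
    """Mixed objective: cross entropy plus a self-critical policy-gradient term.
    The advantage is the F1 of a sampled answer minus the F1 of the model's own
    greedy answer (the self-critical baseline), so gradients only favor samples
    that beat the greedy decode. `sigma` is an illustrative trade-off weight."""
    advantage = sampled_f1 - greedy_f1
    rl_loss = -advantage * log_prob_sampled
    return sigma * ce_loss + (1 - sigma) * rl_loss
```

Because the reward is relative to the greedy baseline, an answer that overlaps the gold span without matching it exactly still receives partial credit, which is precisely the misalignment with exact-match cross entropy that the mixed objective addresses.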