Abstract: Slot filling is a demanding knowledge base population task that aims to automatically extract facts about particular entities from unstructured text. Most existing approaches rely on pre-trained extraction models, which may lack robustness when confronted with unseen slots, the so-called zero-shot slot filling problem. Recent studies reduce slot filling to a machine reading comprehension task and achieve some improvement on unseen slots, but they still struggle to generate appropriate questions for the models and to find the right answers. In this paper, we propose a novel end-to-end approach that addresses zero-shot slot filling by unifying natural language question generation and machine reading comprehension. In particular, we explore how to learn a well-organized latent question representation by incorporating external knowledge. We conduct extensive experiments to validate the effectiveness of our model. Experimental results show that the proposed approach outperforms state-of-the-art baseline methods in zero-shot scenarios.