e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Abstract: Recently, there has been an increasing number of efforts to introduce models capable of generating natural language explanations (NLEs) for their predictions on vision-language (VL) tasks. Such models are appealing because they can provide human-friendly and comprehensive explanations. However, there is a lack of comparison between existing methods, which is due to a lack of reusable evaluation frameworks and a scarcity of datasets. In this work, we introduce e-ViL and e-SNLI-VE. e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks. It spans four models and three datasets; both automatic metrics and human evaluation are used to assess model-generated explanations. e-SNLI-VE is currently the largest existing VL dataset with NLEs (over 430k instances). We also propose a new model that combines UNITER [15], which learns joint embeddings of images and text, with GPT-2 [38], a pre-trained language model that is well-suited for text generation. It surpasses the previous state of the art by a large margin across all datasets. Code and data are available here: https://github.com/maximek3/e-ViL
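As a rough illustration of the kind of architecture the abstract describes, the sketch below shows how joint image-text embeddings from a UNITER-style encoder could be fed to GPT-2 as a decoding prefix for explanation generation. This is a minimal conceptual sketch under stated assumptions, not the authors' implementation: the joint_embeds tensor is a random placeholder standing in for real UNITER outputs, and the " because" prompt cue is illustrative.

# Minimal sketch (not the authors' code): condition GPT-2 on a
# multimodal prefix by concatenating vision-language embeddings
# with prompt token embeddings and decoding from there.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Placeholder for UNITER-style joint image-text embeddings:
# shape (batch, sequence, hidden); hidden must match GPT-2's n_embd (768).
joint_embeds = torch.randn(1, 40, model.config.n_embd)

# A textual cue such as " because" nudges the decoder toward an explanation.
prompt_ids = tokenizer.encode(" because", return_tensors="pt")
prompt_embeds = model.transformer.wte(prompt_ids)

# Concatenate the multimodal prefix with the prompt and take one greedy step.
inputs_embeds = torch.cat([joint_embeds, prompt_embeds], dim=1)
with torch.no_grad():
    logits = model(inputs_embeds=inputs_embeds).logits
next_token_id = logits[:, -1, :].argmax(dim=-1)
print(tokenizer.decode(next_token_id.tolist()))

In practice, the encoder's embeddings would come from a trained vision-language model and the whole stack would be fine-tuned end-to-end on NLE data; the single greedy step above merely shows how the prefix conditions the language model.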