Abstract: We address the task of detecting foiled image captions, i.e. identifying whether a caption contains a word that has been deliberately replaced by a semantically similar word, thus rendering it inaccurate with respect to the image being described. Solving this problem should in principle require a fine-grained understanding of images to detect linguistically valid perturbations in captions. In such contexts, encoding sufficiently descriptive image information becomes a key challenge. In this paper, we demonstrate that it is possible to solve this task using simple, interpretable yet powerful representations based on explicit object information. Our models achieve state-of-the-art performance on a standard dataset, with scores exceeding those achieved by humans on the task. We also measure the upper-bound performance of our models using gold-standard annotations. Our analysis reveals that the simpler model performs well even without image information, suggesting that the dataset contains strong linguistic bias.
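To make the setting concrete, the sketch below is a minimal, hypothetical illustration (not the authors' actual model) of the general idea of classifying captions as original or foiled from explicit object information: each image is represented by a bag of its annotated object categories, concatenated with a bag-of-words vector of the caption, and fed to a linear classifier. The object lists, captions, and labels are toy data invented for the example.

```python
# Hypothetical sketch: bag-of-objects + bag-of-words features for foil detection.
# Not the paper's model; toy data only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack

def object_features(objects):
    # Count each explicit object category annotated/detected in the image.
    feats = {}
    for obj in objects:
        feats["obj=" + obj] = feats.get("obj=" + obj, 0) + 1
    return feats

def caption_features(caption):
    # Simple bag of lowercased caption tokens.
    feats = {}
    for tok in caption.lower().split():
        feats["tok=" + tok] = feats.get("tok=" + tok, 0) + 1
    return feats

# Toy examples: (image object annotations, caption, 1 if the caption is foiled).
data = [
    (["dog", "frisbee"], "a dog catches a frisbee", 0),
    (["dog", "frisbee"], "a cat catches a frisbee", 1),
    (["person", "bicycle"], "a man rides a bicycle", 0),
    (["person", "bicycle"], "a man rides a motorcycle", 1),
]

obj_vec, cap_vec = DictVectorizer(), DictVectorizer()
X_obj = obj_vec.fit_transform([object_features(o) for o, _, _ in data])
X_cap = cap_vec.fit_transform([caption_features(c) for _, c, _ in data])
X = hstack([X_obj, X_cap])
y = [label for _, _, label in data]

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))  # predictions on the toy training set
```

Dropping the object block of the feature vector gives a caption-only baseline, which is one way to probe the kind of linguistic bias the abstract mentions.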