What is image captioning made of?

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn ‘distributional similarity’ in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space. To validate our hypothesis, we focus on the ‘image’ side of image captioning, and vary the input image representation while keeping the RNN text generation model of a CNN-RNN constant. We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis. We find that image captioning models (i) are capable of separating structure from noisy input representations; (ii) suffer virtually no performance loss when a high-dimensional representation is compressed to a lower-dimensional space; (iii) cluster images with similar visual and linguistic information together; (iv) are heavily reliant on test sets with a distribution similar to that of the training set; (v) repeatedly generate the same captions by matching images and ‘retrieving’ a caption in the joint visual-textual space. Our experiments all point to one fact: our distributional similarity hypothesis holds. We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace.
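To make the ‘sparse bag-of-objects vector’ mentioned in the abstract concrete, here is a minimal illustrative sketch: it assumes a fixed object vocabulary from an off-the-shelf detector and counts detected categories per image. The vocabulary, function names, and example detections are hypothetical and not taken from the paper's released code.

```python
import numpy as np

# Hypothetical object vocabulary (in practice this would come from the
# category set of whatever object detector is used).
OBJECT_VOCAB = ["person", "dog", "car", "bicycle", "chair"]
INDEX = {name: i for i, name in enumerate(OBJECT_VOCAB)}

def bag_of_objects(detections):
    """Map a list of detected object labels to a sparse count vector.

    Each dimension corresponds to one object category, so the resulting
    image representation is directly interpretable, unlike dense CNN features.
    """
    vec = np.zeros(len(OBJECT_VOCAB), dtype=np.float32)
    for label in detections:
        if label in INDEX:
            vec[INDEX[label]] += 1.0
    return vec

# Example: an image in which a detector found two people and a dog.
print(bag_of_objects(["person", "person", "dog"]))  # -> [2. 1. 0. 0. 0.]
```

In the paper's setup such a vector would simply replace the CNN feature as input to the otherwise unchanged RNN caption generator.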
TL;DR: This paper presents an empirical analysis on the role of different types of image representations and probes the properties of these representations for the task of image captioning.
Keywords: image captioning, representation learning, interpretability, rnn, multimodal, vision to language
Code: [anonymousiclr/HJNGGmZ0Z](https://github.com/anonymousiclr/HJNGGmZ0Z)
Data: [Flickr30k](https://paperswithcode.com/dataset/flickr30k), [Places](https://paperswithcode.com/dataset/places), [Places205](https://paperswithcode.com/dataset/places205)
14 Replies
