MoVie: Revisiting Modulated Convolutions for Visual Counting and Beyond

Published: 12 Jan 2021 · Last Modified: 05 May 2023 · ICLR 2021 Poster · Readers: Everyone
Keywords: visual counting, visual question answering, common object counting, visual reasoning, modulated convolution
Abstract: This paper focuses on visual counting, which aims to predict the number of occurrences given a natural image and a query (e.g. a question or a category). Unlike most prior work that relies on explicit, symbolic models, which can be computationally expensive and limited in generalization, we propose a simple and effective alternative by revisiting modulated convolutions that fuse the query and the image locally. Following the design of the residual bottleneck, we call our method MoVie, short for Modulated conVolutional bottlenecks. Notably, MoVie reasons implicitly and holistically and requires only a single forward pass during inference. Nevertheless, MoVie demonstrates strong performance on counting: 1) it advances the state of the art on counting-specific VQA tasks while being more efficient; 2) it outperforms prior art on difficult benchmarks such as COCO for common object counting; and 3) it helped us secure first place in the 2020 VQA challenge when integrated as a module for ‘number’-related questions in generic VQA models. Finally, we show evidence that modulated convolutions such as MoVie can serve as a general mechanism for reasoning tasks beyond counting.
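The abstract describes the architecture only at a high level. Below is a minimal sketch of what a modulated convolutional bottleneck could look like, assuming FiLM-style channel-wise scale-and-shift modulation of the middle convolution conditioned on a pooled query embedding. All class and parameter names are illustrative assumptions rather than the authors' API; the actual implementation is in the linked facebookresearch/mmf project.

```python
# Illustrative sketch (assumption): a FiLM-style modulated convolutional bottleneck.
# Names and hyperparameters are hypothetical, not taken from the MMF implementation.
import torch
import torch.nn as nn


class ModulatedConvBottleneck(nn.Module):
    """Residual bottleneck whose middle 3x3 conv is modulated by a query embedding."""

    def __init__(self, channels: int, bottleneck: int, query_dim: int):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.conv = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1)
        self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1)
        # The query embedding predicts a per-channel scale (gamma) and shift (beta).
        self.film = nn.Linear(query_dim, 2 * bottleneck)
        self.norm = nn.BatchNorm2d(bottleneck, affine=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) image feature map; query: (B, query_dim) pooled question/category embedding.
        residual = x
        h = self.relu(self.reduce(x))
        h = self.norm(self.conv(h))
        gamma, beta = self.film(query).chunk(2, dim=1)
        # Broadcasting the channel-wise modulation over all spatial positions fuses
        # the query with the image features locally, in a single forward pass.
        h = gamma.unsqueeze(-1).unsqueeze(-1) * h + beta.unsqueeze(-1).unsqueeze(-1)
        h = self.relu(h)
        return self.relu(residual + self.expand(h))


# Hypothetical usage on a 14x14 feature grid with a 512-d query embedding.
block = ModulatedConvBottleneck(channels=256, bottleneck=64, query_dim=512)
out = block(torch.randn(2, 256, 14, 14), torch.randn(2, 512))
```

Because the modulation is broadcast over every spatial location, one pass through a stack of such blocks suffices at inference time, consistent with the single-forward-pass property stated in the abstract.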
One-sentence Summary: 2020 VQA challenge winner; state-of-the-art performance on three counting benchmarks; extends beyond counting toward general visual reasoning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [facebookresearch/mmf](https://github.com/facebookresearch/mmf/tree/master/projects/movie_mcan)
Data: [CLEVR](https://paperswithcode.com/dataset/clevr), [COCO](https://paperswithcode.com/dataset/coco), [GQA](https://paperswithcode.com/dataset/gqa), [HowMany-QA](https://paperswithcode.com/dataset/howmany-qa), [ImageNet](https://paperswithcode.com/dataset/imagenet), [TallyQA](https://paperswithcode.com/dataset/tallyqa), [Visual Question Answering v2.0](https://paperswithcode.com/dataset/visual-question-answering-v2-0)