MoVie: Revisiting Modulated Convolutions for Visual Counting and Beyond

Sep 28, 2020 (edited Feb 10, 2022) · ICLR 2021 Poster
  • Keywords: visual counting, visual question answering, common object counting, visual reasoning, modulated convolution
  • Abstract: This paper focuses on visual counting, which aims to predict the number of occurrences given a natural image and a query (e.g., a question or a category). Unlike most prior works that use explicit, symbolic models, which can be computationally expensive and limited in generalization, we propose a simple and effective alternative by revisiting modulated convolutions that fuse the query and the image locally. Following the design of the residual bottleneck, we call our method MoVie, short for Modulated conVolutional bottlenecks. Notably, MoVie reasons implicitly and holistically and needs only a single forward pass during inference. Nevertheless, MoVie shows strong performance for counting: 1) advancing the state-of-the-art on counting-specific VQA tasks while being more efficient; 2) outperforming prior art on difficult benchmarks like COCO for common object counting; 3) helping us secure first place in the 2020 VQA challenge when integrated as a module for ‘number’-related questions in generic VQA models. Finally, we show evidence that modulated convolutions such as MoVie can serve as a general mechanism for reasoning tasks beyond counting.
  • One-sentence Summary: 2020 VQA challenge winner; state-of-the-art performance on three counting benchmarks; can work beyond counting towards general visual reasoning
  • Supplementary Material: zip
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Code: facebookresearch/mmf (GitHub)
  • Data: CLEVR, COCO, GQA, ImageNet, TallyQA, Visual Question Answering v2.0
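The abstract describes fusing a query with image features via modulated convolutions inside a residual bottleneck. As a rough illustration only (not the authors' exact MoVie block; layer sizes, the tanh nonlinearity, and the FiLM-style per-channel scale/shift are assumptions), a query-modulated residual bottleneck can be sketched in plain numpy:

```python
import numpy as np

def modulated_bottleneck(feat, query, w_gamma, w_beta, w_in, w_out):
    """Hypothetical sketch of a query-modulated convolutional bottleneck.

    feat:  (C, H, W) image feature map
    query: (D,) pooled query embedding (e.g., from a question encoder)
    The 1x1 convolutions are expressed as matrix multiplies over flattened
    spatial positions; the query produces per-channel scale/shift terms
    (FiLM-style modulation, an assumed instantiation) applied locally at
    every spatial location.
    """
    C, H, W = feat.shape
    x = w_in @ feat.reshape(C, -1)             # 1x1 conv reduce: (Cb, H*W)
    gamma = np.tanh(w_gamma @ query)           # per-channel scale: (Cb,)
    beta = np.tanh(w_beta @ query)             # per-channel shift: (Cb,)
    x = gamma[:, None] * x + beta[:, None]     # modulate every position
    x = np.maximum(x, 0.0)                     # ReLU
    y = w_out @ x                              # 1x1 conv expand: (C, H*W)
    return feat + y.reshape(C, H, W)           # residual connection

# Toy shapes for illustration.
C, Cb, D, H, W = 8, 4, 6, 5, 5
rng = np.random.default_rng(0)
feat = rng.standard_normal((C, H, W))
query = rng.standard_normal(D)
out = modulated_bottleneck(
    feat, query,
    rng.standard_normal((Cb, D)), rng.standard_normal((Cb, D)),
    rng.standard_normal((Cb, C)), rng.standard_normal((C, Cb)),
)
print(out.shape)  # (8, 5, 5)
```

Because modulation and the residual add preserve the feature-map shape, such blocks can be stacked after a backbone and pooled into a single counting head, consistent with the paper's single-forward-pass claim.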